00:00:00.000 Started by upstream project "autotest-per-patch" build number 132703
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.563 The recommended git tool is: git
00:00:02.564 using credential 00000000-0000-0000-0000-000000000002
00:00:02.566 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.577 Fetching changes from the remote Git repository
00:00:02.582 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.595 Using shallow fetch with depth 1
00:00:02.595 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.595 > git --version # timeout=10
00:00:02.606 > git --version # 'git version 2.39.2'
00:00:02.606 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.617 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.617 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.537 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.550 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.563 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.563 > git config core.sparsecheckout # timeout=10
00:00:08.576 > git read-tree -mu HEAD # timeout=10
00:00:08.595 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.621 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.621 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.718 [Pipeline] Start of Pipeline
00:00:08.732 [Pipeline] library
00:00:08.734 Loading library shm_lib@master
00:00:08.734 Library shm_lib@master is cached. Copying from home.
00:00:08.747 [Pipeline] node
00:00:08.755 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:08.756 [Pipeline] {
00:00:08.763 [Pipeline] catchError
00:00:08.764 [Pipeline] {
00:00:08.777 [Pipeline] wrap
00:00:08.789 [Pipeline] {
00:00:08.794 [Pipeline] stage
00:00:08.796 [Pipeline] { (Prologue)
00:00:08.808 [Pipeline] echo
00:00:08.809 Node: VM-host-SM17
00:00:08.813 [Pipeline] cleanWs
00:00:08.820 [WS-CLEANUP] Deleting project workspace...
00:00:08.820 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.826 [WS-CLEANUP] done
00:00:09.027 [Pipeline] setCustomBuildProperty
00:00:09.113 [Pipeline] httpRequest
00:00:09.502 [Pipeline] echo
00:00:09.503 Sorcerer 10.211.164.20 is alive
00:00:09.512 [Pipeline] retry
00:00:09.514 [Pipeline] {
00:00:09.528 [Pipeline] httpRequest
00:00:09.532 HttpMethod: GET
00:00:09.532 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.532 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.536 Response Code: HTTP/1.1 200 OK
00:00:09.537 Success: Status code 200 is in the accepted range: 200,404
00:00:09.537 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.005 [Pipeline] }
00:00:24.023 [Pipeline] // retry
00:00:24.032 [Pipeline] sh
00:00:24.333 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.348 [Pipeline] httpRequest
00:00:24.716 [Pipeline] echo
00:00:24.718 Sorcerer 10.211.164.20 is alive
00:00:24.728 [Pipeline] retry
00:00:24.730 [Pipeline] {
00:00:24.744 [Pipeline] httpRequest
00:00:24.749 HttpMethod: GET
00:00:24.749 URL: http://10.211.164.20/packages/spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:00:24.750 Sending request to url: http://10.211.164.20/packages/spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:00:24.754 Response Code: HTTP/1.1 200 OK
00:00:24.755 Success: Status code 200 is in the accepted range: 200,404
00:00:24.755 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:03:45.297 [Pipeline] }
00:03:45.315 [Pipeline] // retry
00:03:45.323 [Pipeline] sh
00:03:45.603 + tar --no-same-owner -xf spdk_98eca6fa083aaf48dc253cd326ac15e635bc4141.tar.gz
00:03:48.903 [Pipeline] sh
00:03:49.189 + git -C spdk log --oneline -n5
00:03:49.189 98eca6fa0 lib/thread: Add API to register a post poller handler
00:03:49.189 2c140f58f nvme/rdma: Support accel sequence
00:03:49.189 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:03:49.189 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:03:49.189 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:03:49.208 [Pipeline] writeFile
00:03:49.224 [Pipeline] sh
00:03:49.518 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:49.530 [Pipeline] sh
00:03:49.804 + cat autorun-spdk.conf
00:03:49.804 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:49.804 SPDK_RUN_ASAN=1
00:03:49.804 SPDK_RUN_UBSAN=1
00:03:49.804 SPDK_TEST_RAID=1
00:03:49.804 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:49.810 RUN_NIGHTLY=0
00:03:49.812 [Pipeline] }
00:03:49.825 [Pipeline] // stage
00:03:49.840 [Pipeline] stage
00:03:49.842 [Pipeline] { (Run VM)
00:03:49.854 [Pipeline] sh
00:03:50.134 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:50.134 + echo 'Start stage prepare_nvme.sh'
00:03:50.134 Start stage prepare_nvme.sh
00:03:50.134 + [[ -n 0 ]]
00:03:50.134 + disk_prefix=ex0
00:03:50.134 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:03:50.134 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:03:50.134 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:03:50.134 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:50.134 ++ SPDK_RUN_ASAN=1
00:03:50.134 ++ SPDK_RUN_UBSAN=1
00:03:50.134 ++ SPDK_TEST_RAID=1
00:03:50.134 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:50.134 ++ RUN_NIGHTLY=0
00:03:50.134 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:03:50.134 + nvme_files=()
00:03:50.134 + declare -A nvme_files
00:03:50.134 + backend_dir=/var/lib/libvirt/images/backends
00:03:50.134 + nvme_files['nvme.img']=5G
00:03:50.134 + nvme_files['nvme-cmb.img']=5G
00:03:50.134 + nvme_files['nvme-multi0.img']=4G
00:03:50.134 + nvme_files['nvme-multi1.img']=4G
00:03:50.134 + nvme_files['nvme-multi2.img']=4G
00:03:50.134 + nvme_files['nvme-openstack.img']=8G
00:03:50.134 + nvme_files['nvme-zns.img']=5G
00:03:50.134 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:50.134 + (( SPDK_TEST_FTL == 1 ))
00:03:50.134 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:50.134 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:50.134 + for nvme in "${!nvme_files[@]}"
00:03:50.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:03:50.134 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:50.134 + for nvme in "${!nvme_files[@]}"
00:03:50.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:03:50.134 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:50.134 + for nvme in "${!nvme_files[@]}"
00:03:50.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:03:50.134 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:50.134 + for nvme in "${!nvme_files[@]}"
00:03:50.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:03:50.134 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:50.134 + for nvme in "${!nvme_files[@]}"
00:03:50.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:03:50.393 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:50.393 + for nvme in "${!nvme_files[@]}"
00:03:50.393 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:03:50.393 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:50.393 + for nvme in "${!nvme_files[@]}"
00:03:50.393 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:03:50.393 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:50.393 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:03:50.393 + echo 'End stage prepare_nvme.sh'
00:03:50.393 End stage prepare_nvme.sh
00:03:50.404 [Pipeline] sh
00:03:50.685 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:50.685 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:03:50.685
00:03:50.685 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:03:50.685 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:03:50.685 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:03:50.685 HELP=0
00:03:50.685 DRY_RUN=0
00:03:50.685 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:03:50.685 NVME_DISKS_TYPE=nvme,nvme,
00:03:50.685 NVME_AUTO_CREATE=0
00:03:50.685 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:03:50.685 NVME_CMB=,,
00:03:50.685 NVME_PMR=,,
00:03:50.685 NVME_ZNS=,,
00:03:50.685 NVME_MS=,,
00:03:50.685 NVME_FDP=,,
00:03:50.685 SPDK_VAGRANT_DISTRO=fedora39
00:03:50.685 SPDK_VAGRANT_VMCPU=10
00:03:50.685 SPDK_VAGRANT_VMRAM=12288
00:03:50.685 SPDK_VAGRANT_PROVIDER=libvirt
00:03:50.685 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:50.685 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:50.685 SPDK_OPENSTACK_NETWORK=0
00:03:50.685 VAGRANT_PACKAGE_BOX=0
00:03:50.685 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:03:50.685 FORCE_DISTRO=true
00:03:50.685 VAGRANT_BOX_VERSION=
00:03:50.685 EXTRA_VAGRANTFILES=
00:03:50.685 NIC_MODEL=e1000
00:03:50.685
00:03:50.685 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:03:50.685 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:03:53.973 Bringing machine 'default' up with 'libvirt' provider...
00:03:54.541 ==> default: Creating image (snapshot of base box volume).
00:03:54.800 ==> default: Creating domain with the following settings...
00:03:54.800 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733426627_7f635ee9878bc513dfac
00:03:54.801 ==> default: -- Domain type: kvm
00:03:54.801 ==> default: -- Cpus: 10
00:03:54.801 ==> default: -- Feature: acpi
00:03:54.801 ==> default: -- Feature: apic
00:03:54.801 ==> default: -- Feature: pae
00:03:54.801 ==> default: -- Memory: 12288M
00:03:54.801 ==> default: -- Memory Backing: hugepages:
00:03:54.801 ==> default: -- Management MAC:
00:03:54.801 ==> default: -- Loader:
00:03:54.801 ==> default: -- Nvram:
00:03:54.801 ==> default: -- Base box: spdk/fedora39
00:03:54.801 ==> default: -- Storage pool: default
00:03:54.801 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733426627_7f635ee9878bc513dfac.img (20G)
00:03:54.801 ==> default: -- Volume Cache: default
00:03:54.801 ==> default: -- Kernel:
00:03:54.801 ==> default: -- Initrd:
00:03:54.801 ==> default: -- Graphics Type: vnc
00:03:54.801 ==> default: -- Graphics Port: -1
00:03:54.801 ==> default: -- Graphics IP: 127.0.0.1
00:03:54.801 ==> default: -- Graphics Password: Not defined
00:03:54.801 ==> default: -- Video Type: cirrus
00:03:54.801 ==> default: -- Video VRAM: 9216
00:03:54.801 ==> default: -- Sound Type:
00:03:54.801 ==> default: -- Keymap: en-us
00:03:54.801 ==> default: -- TPM Path:
00:03:54.801 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:54.801 ==> default: -- Command line args:
00:03:54.801 ==> default: -> value=-device,
00:03:54.801 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:54.801 ==> default: -> value=-drive,
00:03:54.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:03:54.801 ==> default: -> value=-device,
00:03:54.801 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.801 ==> default: -> value=-device,
00:03:54.801 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:54.801 ==> default: -> value=-drive,
00:03:54.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:03:54.801 ==> default: -> value=-device,
00:03:54.801 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.801 ==> default: -> value=-drive,
00:03:54.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:03:54.801 ==> default: -> value=-device,
00:03:54.801 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.801 ==> default: -> value=-drive,
00:03:54.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:03:54.801 ==> default: -> value=-device,
00:03:54.801 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.801 ==> default: Creating shared folders metadata...
00:03:54.801 ==> default: Starting domain.
00:03:56.736 ==> default: Waiting for domain to get an IP address...
00:04:14.815 ==> default: Waiting for SSH to become available...
00:04:14.815 ==> default: Configuring and enabling network interfaces...
00:04:19.004     default: SSH address: 192.168.121.213:22
00:04:19.004     default: SSH username: vagrant
00:04:19.004     default: SSH auth method: private key
00:04:20.910 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:29.039 ==> default: Mounting SSHFS shared folder...
00:04:30.414 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:30.414 ==> default: Checking Mount..
00:04:31.402 ==> default: Folder Successfully Mounted!
00:04:31.402 ==> default: Running provisioner: file...
00:04:32.344     default: ~/.gitconfig => .gitconfig
00:04:32.908
00:04:32.908 SUCCESS!
00:04:32.908
00:04:32.908 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:04:32.908 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:32.908 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:04:32.908
00:04:32.917 [Pipeline] }
00:04:32.934 [Pipeline] // stage
00:04:32.945 [Pipeline] dir
00:04:32.946 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:04:32.948 [Pipeline] {
00:04:32.957 [Pipeline] catchError
00:04:32.959 [Pipeline] {
00:04:32.970 [Pipeline] sh
00:04:33.253 + vagrant ssh-config --host vagrant
00:04:33.253 + sed -ne /^Host/,$p
00:04:33.253 + tee ssh_conf
00:04:37.437 Host vagrant
00:04:37.437   HostName 192.168.121.213
00:04:37.437   User vagrant
00:04:37.437   Port 22
00:04:37.437   UserKnownHostsFile /dev/null
00:04:37.437   StrictHostKeyChecking no
00:04:37.437   PasswordAuthentication no
00:04:37.437   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:37.437   IdentitiesOnly yes
00:04:37.437   LogLevel FATAL
00:04:37.437   ForwardAgent yes
00:04:37.437   ForwardX11 yes
00:04:37.437
00:04:37.451 [Pipeline] withEnv
00:04:37.454 [Pipeline] {
00:04:37.469 [Pipeline] sh
00:04:37.814 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:37.814 source /etc/os-release
00:04:37.814 [[ -e /image.version ]] && img=$(< /image.version)
00:04:37.814 # Minimal, systemd-like check.
00:04:37.814 if [[ -e /.dockerenv ]]; then
00:04:37.814 # Clear garbage from the node's name:
00:04:37.814 # agt-er_autotest_547-896 -> autotest_547-896
00:04:37.814 # $HOSTNAME is the actual container id
00:04:37.814 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:37.814 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:37.814 # We can assume this is a mount from a host where container is running,
00:04:37.814 # so fetch its hostname to easily identify the target swarm worker.
00:04:37.814 container="$(< /etc/hostname) ($agent)"
00:04:37.814 else
00:04:37.814 # Fallback
00:04:37.814 container=$agent
00:04:37.814 fi
00:04:37.814 fi
00:04:37.814 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:37.814
00:04:38.084 [Pipeline] }
00:04:38.105 [Pipeline] // withEnv
00:04:38.113 [Pipeline] setCustomBuildProperty
00:04:38.128 [Pipeline] stage
00:04:38.131 [Pipeline] { (Tests)
00:04:38.149 [Pipeline] sh
00:04:38.452 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:38.466 [Pipeline] sh
00:04:38.745 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:39.022 [Pipeline] timeout
00:04:39.022 Timeout set to expire in 1 hr 30 min
00:04:39.025 [Pipeline] {
00:04:39.040 [Pipeline] sh
00:04:39.321 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:39.891 HEAD is now at 98eca6fa0 lib/thread: Add API to register a post poller handler
00:04:39.905 [Pipeline] sh
00:04:40.189 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:40.462 [Pipeline] sh
00:04:40.794 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:41.067 [Pipeline] sh
00:04:41.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:04:41.347 ++ readlink -f spdk_repo
00:04:41.605 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:41.605 + [[ -n /home/vagrant/spdk_repo ]]
00:04:41.605 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:41.605 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:41.605 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:41.605 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:41.605 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:41.605 + [[ raid-vg-autotest == pkgdep-* ]]
00:04:41.605 + cd /home/vagrant/spdk_repo
00:04:41.605 + source /etc/os-release
00:04:41.605 ++ NAME='Fedora Linux'
00:04:41.605 ++ VERSION='39 (Cloud Edition)'
00:04:41.605 ++ ID=fedora
00:04:41.605 ++ VERSION_ID=39
00:04:41.605 ++ VERSION_CODENAME=
00:04:41.605 ++ PLATFORM_ID=platform:f39
00:04:41.605 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:41.605 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:41.605 ++ LOGO=fedora-logo-icon
00:04:41.605 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:41.605 ++ HOME_URL=https://fedoraproject.org/
00:04:41.605 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:41.605 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:41.605 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:41.605 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:41.605 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:41.605 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:41.605 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:41.605 ++ SUPPORT_END=2024-11-12
00:04:41.605 ++ VARIANT='Cloud Edition'
00:04:41.605 ++ VARIANT_ID=cloud
00:04:41.605 + uname -a
00:04:41.605 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:41.605 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:41.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:42.120 Hugepages
00:04:42.120 node hugesize free / total
00:04:42.120 node0 1048576kB 0 / 0
00:04:42.120 node0 2048kB 0 / 0
00:04:42.120
00:04:42.120 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:42.120 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:42.120 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:42.120 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:42.120 + rm -f /tmp/spdk-ld-path
00:04:42.120 + source autorun-spdk.conf
00:04:42.120 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:42.120 ++ SPDK_RUN_ASAN=1
00:04:42.120 ++ SPDK_RUN_UBSAN=1
00:04:42.120 ++ SPDK_TEST_RAID=1
00:04:42.120 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:42.120 ++ RUN_NIGHTLY=0
00:04:42.120 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:42.120 + [[ -n '' ]]
00:04:42.120 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:42.120 + for M in /var/spdk/build-*-manifest.txt
00:04:42.120 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:42.120 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:42.120 + for M in /var/spdk/build-*-manifest.txt
00:04:42.120 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:42.120 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:42.120 + for M in /var/spdk/build-*-manifest.txt
00:04:42.120 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:42.120 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:42.120 ++ uname
00:04:42.120 + [[ Linux == \L\i\n\u\x ]]
00:04:42.120 + sudo dmesg -T
00:04:42.120 + sudo dmesg --clear
00:04:42.120 + dmesg_pid=5208
00:04:42.120 + sudo dmesg -Tw
00:04:42.120 + [[ Fedora Linux == FreeBSD ]]
00:04:42.120 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:42.120 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:42.120 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:42.120 + [[ -x /usr/src/fio-static/fio ]]
00:04:42.120 + export FIO_BIN=/usr/src/fio-static/fio
00:04:42.120 + FIO_BIN=/usr/src/fio-static/fio
00:04:42.120 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:42.120 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:42.120 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:42.120 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:42.120 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:42.120 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:42.120 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:42.120 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:42.120 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:42.378 19:24:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:42.378 19:24:35 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:42.378 19:24:35 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:42.378 19:24:35 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:04:42.378 19:24:35 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:04:42.378 19:24:35 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:04:42.378 19:24:35 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:42.378 19:24:35 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:04:42.378 19:24:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:42.378 19:24:35 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:42.378 19:24:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:42.378 19:24:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:42.378 19:24:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:42.378 19:24:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:42.378 19:24:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:42.378 19:24:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:42.378 19:24:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.378 19:24:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.379 19:24:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.379 19:24:35 -- paths/export.sh@5 -- $ export PATH
00:04:42.379 19:24:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.379 19:24:35 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:42.379 19:24:35 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:42.379 19:24:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733426675.XXXXXX
00:04:42.379 19:24:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733426675.6MqFiZ
00:04:42.379 19:24:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:42.379 19:24:35 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:42.379 19:24:35 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:42.379 19:24:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:42.379 19:24:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:42.379 19:24:35 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:42.379 19:24:35 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:42.379 19:24:35 -- common/autotest_common.sh@10 -- $ set +x
00:04:42.379 19:24:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:04:42.379 19:24:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:42.379 19:24:35 -- pm/common@17 -- $ local monitor
00:04:42.379 19:24:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:42.379 19:24:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:42.379 19:24:35 -- pm/common@25 -- $ sleep 1
00:04:42.379 19:24:35 -- pm/common@21 -- $ date +%s
00:04:42.379 19:24:35 -- pm/common@21 -- $ date +%s
00:04:42.379 19:24:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426675
00:04:42.379 19:24:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426675
00:04:42.379 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426675_collect-vmstat.pm.log
00:04:42.379 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426675_collect-cpu-load.pm.log
00:04:43.315 19:24:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:43.315 19:24:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:43.315 19:24:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:43.315 19:24:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:43.315 19:24:36 -- spdk/autobuild.sh@16 -- $ date -u
00:04:43.315 Thu Dec 5 07:24:36 PM UTC 2024
00:04:43.315 19:24:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:43.315 v25.01-pre-298-g98eca6fa0
00:04:43.315 19:24:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:43.315 19:24:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:43.315 19:24:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:43.315 19:24:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:43.315 19:24:36 -- common/autotest_common.sh@10 -- $ set +x
00:04:43.315 ************************************
00:04:43.315 START TEST asan
00:04:43.315 ************************************
00:04:43.315 using asan
00:04:43.315 19:24:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:43.315
00:04:43.315 real 0m0.000s
00:04:43.315 user 0m0.000s
00:04:43.315 sys 0m0.000s
00:04:43.315 19:24:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:43.315 ************************************
00:04:43.315 END TEST asan
00:04:43.315 ************************************
00:04:43.315 19:24:36 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:43.315 19:24:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:43.315 19:24:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:43.315 19:24:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:43.315 19:24:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:43.315 19:24:36 -- common/autotest_common.sh@10 -- $ set +x
00:04:43.574 ************************************
00:04:43.574 START TEST ubsan
00:04:43.574 ************************************
00:04:43.574 using ubsan
00:04:43.574 19:24:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:43.574
00:04:43.574 real 0m0.000s
00:04:43.574 user 0m0.000s
00:04:43.574 sys 0m0.000s
00:04:43.574 19:24:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:43.574 19:24:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:43.574 ************************************
00:04:43.574 END TEST ubsan
00:04:43.574 ************************************
00:04:43.574 19:24:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:43.574 19:24:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:43.574 19:24:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:43.574 19:24:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:43.574 19:24:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:43.574 19:24:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:43.574 19:24:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:43.574 19:24:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:43.574 19:24:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:04:43.574 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:43.574 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:44.141 Using 'verbs' RDMA provider
00:04:59.957 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:12.235 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:12.235 Creating mk/config.mk...done.
00:05:12.235 Creating mk/cc.flags.mk...done.
00:05:12.235 Type 'make' to build.
00:05:12.235 19:25:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:12.235 19:25:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:12.235 19:25:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:12.235 19:25:05 -- common/autotest_common.sh@10 -- $ set +x
00:05:12.235 ************************************
00:05:12.235 START TEST make
00:05:12.235 ************************************
00:05:12.235 19:25:05 make -- common/autotest_common.sh@1129 -- $ make -j10
make[1]: Nothing to be done for 'all'.
00:05:27.149 The Meson build system 00:05:27.149 Version: 1.5.0 00:05:27.149 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:27.149 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:27.149 Build type: native build 00:05:27.149 Program cat found: YES (/usr/bin/cat) 00:05:27.149 Project name: DPDK 00:05:27.149 Project version: 24.03.0 00:05:27.149 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:27.149 C linker for the host machine: cc ld.bfd 2.40-14 00:05:27.149 Host machine cpu family: x86_64 00:05:27.149 Host machine cpu: x86_64 00:05:27.149 Message: ## Building in Developer Mode ## 00:05:27.149 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:27.149 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:27.149 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:27.149 Program python3 found: YES (/usr/bin/python3) 00:05:27.149 Program cat found: YES (/usr/bin/cat) 00:05:27.149 Compiler for C supports arguments -march=native: YES 00:05:27.149 Checking for size of "void *" : 8 00:05:27.149 Checking for size of "void *" : 8 (cached) 00:05:27.149 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:27.149 Library m found: YES 00:05:27.149 Library numa found: YES 00:05:27.149 Has header "numaif.h" : YES 00:05:27.149 Library fdt found: NO 00:05:27.149 Library execinfo found: NO 00:05:27.149 Has header "execinfo.h" : YES 00:05:27.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:27.149 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:27.149 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:27.149 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:27.149 Run-time dependency openssl found: YES 3.1.1 00:05:27.149 Run-time dependency libpcap found: YES 1.10.4 00:05:27.149 Has header "pcap.h" with dependency 
libpcap: YES 00:05:27.149 Compiler for C supports arguments -Wcast-qual: YES 00:05:27.149 Compiler for C supports arguments -Wdeprecated: YES 00:05:27.149 Compiler for C supports arguments -Wformat: YES 00:05:27.149 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:27.149 Compiler for C supports arguments -Wformat-security: NO 00:05:27.149 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:27.149 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:27.149 Compiler for C supports arguments -Wnested-externs: YES 00:05:27.149 Compiler for C supports arguments -Wold-style-definition: YES 00:05:27.149 Compiler for C supports arguments -Wpointer-arith: YES 00:05:27.149 Compiler for C supports arguments -Wsign-compare: YES 00:05:27.149 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:27.149 Compiler for C supports arguments -Wundef: YES 00:05:27.149 Compiler for C supports arguments -Wwrite-strings: YES 00:05:27.149 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:27.149 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:27.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:27.149 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:27.149 Program objdump found: YES (/usr/bin/objdump) 00:05:27.149 Compiler for C supports arguments -mavx512f: YES 00:05:27.149 Checking if "AVX512 checking" compiles: YES 00:05:27.149 Fetching value of define "__SSE4_2__" : 1 00:05:27.149 Fetching value of define "__AES__" : 1 00:05:27.149 Fetching value of define "__AVX__" : 1 00:05:27.149 Fetching value of define "__AVX2__" : 1 00:05:27.149 Fetching value of define "__AVX512BW__" : (undefined) 00:05:27.149 Fetching value of define "__AVX512CD__" : (undefined) 00:05:27.149 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:27.149 Fetching value of define "__AVX512F__" : (undefined) 00:05:27.149 Fetching value of define "__AVX512VL__" : 
(undefined) 00:05:27.149 Fetching value of define "__PCLMUL__" : 1 00:05:27.149 Fetching value of define "__RDRND__" : 1 00:05:27.149 Fetching value of define "__RDSEED__" : 1 00:05:27.149 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:27.149 Fetching value of define "__znver1__" : (undefined) 00:05:27.149 Fetching value of define "__znver2__" : (undefined) 00:05:27.149 Fetching value of define "__znver3__" : (undefined) 00:05:27.149 Fetching value of define "__znver4__" : (undefined) 00:05:27.149 Library asan found: YES 00:05:27.149 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:27.149 Message: lib/log: Defining dependency "log" 00:05:27.149 Message: lib/kvargs: Defining dependency "kvargs" 00:05:27.149 Message: lib/telemetry: Defining dependency "telemetry" 00:05:27.149 Library rt found: YES 00:05:27.149 Checking for function "getentropy" : NO 00:05:27.149 Message: lib/eal: Defining dependency "eal" 00:05:27.149 Message: lib/ring: Defining dependency "ring" 00:05:27.149 Message: lib/rcu: Defining dependency "rcu" 00:05:27.149 Message: lib/mempool: Defining dependency "mempool" 00:05:27.149 Message: lib/mbuf: Defining dependency "mbuf" 00:05:27.149 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:27.149 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:27.149 Compiler for C supports arguments -mpclmul: YES 00:05:27.149 Compiler for C supports arguments -maes: YES 00:05:27.149 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:27.149 Compiler for C supports arguments -mavx512bw: YES 00:05:27.149 Compiler for C supports arguments -mavx512dq: YES 00:05:27.149 Compiler for C supports arguments -mavx512vl: YES 00:05:27.149 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:27.149 Compiler for C supports arguments -mavx2: YES 00:05:27.149 Compiler for C supports arguments -mavx: YES 00:05:27.149 Message: lib/net: Defining dependency "net" 00:05:27.149 Message: lib/meter: Defining 
dependency "meter" 00:05:27.149 Message: lib/ethdev: Defining dependency "ethdev" 00:05:27.149 Message: lib/pci: Defining dependency "pci" 00:05:27.149 Message: lib/cmdline: Defining dependency "cmdline" 00:05:27.149 Message: lib/hash: Defining dependency "hash" 00:05:27.149 Message: lib/timer: Defining dependency "timer" 00:05:27.149 Message: lib/compressdev: Defining dependency "compressdev" 00:05:27.149 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:27.149 Message: lib/dmadev: Defining dependency "dmadev" 00:05:27.149 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:27.149 Message: lib/power: Defining dependency "power" 00:05:27.149 Message: lib/reorder: Defining dependency "reorder" 00:05:27.149 Message: lib/security: Defining dependency "security" 00:05:27.149 Has header "linux/userfaultfd.h" : YES 00:05:27.149 Has header "linux/vduse.h" : YES 00:05:27.149 Message: lib/vhost: Defining dependency "vhost" 00:05:27.149 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:27.149 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:27.149 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:27.149 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:27.149 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:27.149 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:27.149 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:27.149 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:27.149 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:27.149 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:27.149 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:27.149 Configuring doxy-api-html.conf using configuration 00:05:27.149 Configuring doxy-api-man.conf using configuration 00:05:27.149 Program mandb found: YES 
(/usr/bin/mandb) 00:05:27.150 Program sphinx-build found: NO 00:05:27.150 Configuring rte_build_config.h using configuration 00:05:27.150 Message: 00:05:27.150 ================= 00:05:27.150 Applications Enabled 00:05:27.150 ================= 00:05:27.150 00:05:27.150 apps: 00:05:27.150 00:05:27.150 00:05:27.150 Message: 00:05:27.150 ================= 00:05:27.150 Libraries Enabled 00:05:27.150 ================= 00:05:27.150 00:05:27.150 libs: 00:05:27.150 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:27.150 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:27.150 cryptodev, dmadev, power, reorder, security, vhost, 00:05:27.150 00:05:27.150 Message: 00:05:27.150 =============== 00:05:27.150 Drivers Enabled 00:05:27.150 =============== 00:05:27.150 00:05:27.150 common: 00:05:27.150 00:05:27.150 bus: 00:05:27.150 pci, vdev, 00:05:27.150 mempool: 00:05:27.150 ring, 00:05:27.150 dma: 00:05:27.150 00:05:27.150 net: 00:05:27.150 00:05:27.150 crypto: 00:05:27.150 00:05:27.150 compress: 00:05:27.150 00:05:27.150 vdpa: 00:05:27.150 00:05:27.150 00:05:27.150 Message: 00:05:27.150 ================= 00:05:27.150 Content Skipped 00:05:27.150 ================= 00:05:27.150 00:05:27.150 apps: 00:05:27.150 dumpcap: explicitly disabled via build config 00:05:27.150 graph: explicitly disabled via build config 00:05:27.150 pdump: explicitly disabled via build config 00:05:27.150 proc-info: explicitly disabled via build config 00:05:27.150 test-acl: explicitly disabled via build config 00:05:27.150 test-bbdev: explicitly disabled via build config 00:05:27.150 test-cmdline: explicitly disabled via build config 00:05:27.150 test-compress-perf: explicitly disabled via build config 00:05:27.150 test-crypto-perf: explicitly disabled via build config 00:05:27.150 test-dma-perf: explicitly disabled via build config 00:05:27.150 test-eventdev: explicitly disabled via build config 00:05:27.150 test-fib: explicitly disabled via build config 00:05:27.150 
test-flow-perf: explicitly disabled via build config 00:05:27.150 test-gpudev: explicitly disabled via build config 00:05:27.150 test-mldev: explicitly disabled via build config 00:05:27.150 test-pipeline: explicitly disabled via build config 00:05:27.150 test-pmd: explicitly disabled via build config 00:05:27.150 test-regex: explicitly disabled via build config 00:05:27.150 test-sad: explicitly disabled via build config 00:05:27.150 test-security-perf: explicitly disabled via build config 00:05:27.150 00:05:27.150 libs: 00:05:27.150 argparse: explicitly disabled via build config 00:05:27.150 metrics: explicitly disabled via build config 00:05:27.150 acl: explicitly disabled via build config 00:05:27.150 bbdev: explicitly disabled via build config 00:05:27.150 bitratestats: explicitly disabled via build config 00:05:27.150 bpf: explicitly disabled via build config 00:05:27.150 cfgfile: explicitly disabled via build config 00:05:27.150 distributor: explicitly disabled via build config 00:05:27.150 efd: explicitly disabled via build config 00:05:27.150 eventdev: explicitly disabled via build config 00:05:27.150 dispatcher: explicitly disabled via build config 00:05:27.150 gpudev: explicitly disabled via build config 00:05:27.150 gro: explicitly disabled via build config 00:05:27.150 gso: explicitly disabled via build config 00:05:27.150 ip_frag: explicitly disabled via build config 00:05:27.150 jobstats: explicitly disabled via build config 00:05:27.150 latencystats: explicitly disabled via build config 00:05:27.150 lpm: explicitly disabled via build config 00:05:27.150 member: explicitly disabled via build config 00:05:27.150 pcapng: explicitly disabled via build config 00:05:27.150 rawdev: explicitly disabled via build config 00:05:27.150 regexdev: explicitly disabled via build config 00:05:27.150 mldev: explicitly disabled via build config 00:05:27.150 rib: explicitly disabled via build config 00:05:27.150 sched: explicitly disabled via build config 00:05:27.150 
stack: explicitly disabled via build config 00:05:27.150 ipsec: explicitly disabled via build config 00:05:27.150 pdcp: explicitly disabled via build config 00:05:27.150 fib: explicitly disabled via build config 00:05:27.150 port: explicitly disabled via build config 00:05:27.150 pdump: explicitly disabled via build config 00:05:27.150 table: explicitly disabled via build config 00:05:27.150 pipeline: explicitly disabled via build config 00:05:27.150 graph: explicitly disabled via build config 00:05:27.150 node: explicitly disabled via build config 00:05:27.150 00:05:27.150 drivers: 00:05:27.150 common/cpt: not in enabled drivers build config 00:05:27.150 common/dpaax: not in enabled drivers build config 00:05:27.150 common/iavf: not in enabled drivers build config 00:05:27.150 common/idpf: not in enabled drivers build config 00:05:27.150 common/ionic: not in enabled drivers build config 00:05:27.150 common/mvep: not in enabled drivers build config 00:05:27.150 common/octeontx: not in enabled drivers build config 00:05:27.150 bus/auxiliary: not in enabled drivers build config 00:05:27.150 bus/cdx: not in enabled drivers build config 00:05:27.150 bus/dpaa: not in enabled drivers build config 00:05:27.150 bus/fslmc: not in enabled drivers build config 00:05:27.150 bus/ifpga: not in enabled drivers build config 00:05:27.150 bus/platform: not in enabled drivers build config 00:05:27.150 bus/uacce: not in enabled drivers build config 00:05:27.150 bus/vmbus: not in enabled drivers build config 00:05:27.150 common/cnxk: not in enabled drivers build config 00:05:27.150 common/mlx5: not in enabled drivers build config 00:05:27.150 common/nfp: not in enabled drivers build config 00:05:27.150 common/nitrox: not in enabled drivers build config 00:05:27.150 common/qat: not in enabled drivers build config 00:05:27.150 common/sfc_efx: not in enabled drivers build config 00:05:27.150 mempool/bucket: not in enabled drivers build config 00:05:27.150 mempool/cnxk: not in enabled 
drivers build config 00:05:27.150 mempool/dpaa: not in enabled drivers build config 00:05:27.150 mempool/dpaa2: not in enabled drivers build config 00:05:27.150 mempool/octeontx: not in enabled drivers build config 00:05:27.150 mempool/stack: not in enabled drivers build config 00:05:27.150 dma/cnxk: not in enabled drivers build config 00:05:27.150 dma/dpaa: not in enabled drivers build config 00:05:27.150 dma/dpaa2: not in enabled drivers build config 00:05:27.150 dma/hisilicon: not in enabled drivers build config 00:05:27.150 dma/idxd: not in enabled drivers build config 00:05:27.150 dma/ioat: not in enabled drivers build config 00:05:27.150 dma/skeleton: not in enabled drivers build config 00:05:27.150 net/af_packet: not in enabled drivers build config 00:05:27.150 net/af_xdp: not in enabled drivers build config 00:05:27.150 net/ark: not in enabled drivers build config 00:05:27.150 net/atlantic: not in enabled drivers build config 00:05:27.150 net/avp: not in enabled drivers build config 00:05:27.150 net/axgbe: not in enabled drivers build config 00:05:27.150 net/bnx2x: not in enabled drivers build config 00:05:27.150 net/bnxt: not in enabled drivers build config 00:05:27.150 net/bonding: not in enabled drivers build config 00:05:27.150 net/cnxk: not in enabled drivers build config 00:05:27.150 net/cpfl: not in enabled drivers build config 00:05:27.150 net/cxgbe: not in enabled drivers build config 00:05:27.150 net/dpaa: not in enabled drivers build config 00:05:27.150 net/dpaa2: not in enabled drivers build config 00:05:27.150 net/e1000: not in enabled drivers build config 00:05:27.150 net/ena: not in enabled drivers build config 00:05:27.150 net/enetc: not in enabled drivers build config 00:05:27.150 net/enetfec: not in enabled drivers build config 00:05:27.150 net/enic: not in enabled drivers build config 00:05:27.150 net/failsafe: not in enabled drivers build config 00:05:27.150 net/fm10k: not in enabled drivers build config 00:05:27.150 net/gve: not in 
enabled drivers build config 00:05:27.150 net/hinic: not in enabled drivers build config 00:05:27.150 net/hns3: not in enabled drivers build config 00:05:27.150 net/i40e: not in enabled drivers build config 00:05:27.150 net/iavf: not in enabled drivers build config 00:05:27.150 net/ice: not in enabled drivers build config 00:05:27.150 net/idpf: not in enabled drivers build config 00:05:27.150 net/igc: not in enabled drivers build config 00:05:27.150 net/ionic: not in enabled drivers build config 00:05:27.150 net/ipn3ke: not in enabled drivers build config 00:05:27.150 net/ixgbe: not in enabled drivers build config 00:05:27.150 net/mana: not in enabled drivers build config 00:05:27.150 net/memif: not in enabled drivers build config 00:05:27.150 net/mlx4: not in enabled drivers build config 00:05:27.150 net/mlx5: not in enabled drivers build config 00:05:27.150 net/mvneta: not in enabled drivers build config 00:05:27.150 net/mvpp2: not in enabled drivers build config 00:05:27.150 net/netvsc: not in enabled drivers build config 00:05:27.150 net/nfb: not in enabled drivers build config 00:05:27.150 net/nfp: not in enabled drivers build config 00:05:27.150 net/ngbe: not in enabled drivers build config 00:05:27.150 net/null: not in enabled drivers build config 00:05:27.150 net/octeontx: not in enabled drivers build config 00:05:27.150 net/octeon_ep: not in enabled drivers build config 00:05:27.150 net/pcap: not in enabled drivers build config 00:05:27.150 net/pfe: not in enabled drivers build config 00:05:27.150 net/qede: not in enabled drivers build config 00:05:27.150 net/ring: not in enabled drivers build config 00:05:27.150 net/sfc: not in enabled drivers build config 00:05:27.150 net/softnic: not in enabled drivers build config 00:05:27.150 net/tap: not in enabled drivers build config 00:05:27.150 net/thunderx: not in enabled drivers build config 00:05:27.150 net/txgbe: not in enabled drivers build config 00:05:27.150 net/vdev_netvsc: not in enabled drivers build 
config 00:05:27.150 net/vhost: not in enabled drivers build config 00:05:27.150 net/virtio: not in enabled drivers build config 00:05:27.150 net/vmxnet3: not in enabled drivers build config 00:05:27.150 raw/*: missing internal dependency, "rawdev" 00:05:27.150 crypto/armv8: not in enabled drivers build config 00:05:27.150 crypto/bcmfs: not in enabled drivers build config 00:05:27.150 crypto/caam_jr: not in enabled drivers build config 00:05:27.150 crypto/ccp: not in enabled drivers build config 00:05:27.150 crypto/cnxk: not in enabled drivers build config 00:05:27.150 crypto/dpaa_sec: not in enabled drivers build config 00:05:27.150 crypto/dpaa2_sec: not in enabled drivers build config 00:05:27.150 crypto/ipsec_mb: not in enabled drivers build config 00:05:27.151 crypto/mlx5: not in enabled drivers build config 00:05:27.151 crypto/mvsam: not in enabled drivers build config 00:05:27.151 crypto/nitrox: not in enabled drivers build config 00:05:27.151 crypto/null: not in enabled drivers build config 00:05:27.151 crypto/octeontx: not in enabled drivers build config 00:05:27.151 crypto/openssl: not in enabled drivers build config 00:05:27.151 crypto/scheduler: not in enabled drivers build config 00:05:27.151 crypto/uadk: not in enabled drivers build config 00:05:27.151 crypto/virtio: not in enabled drivers build config 00:05:27.151 compress/isal: not in enabled drivers build config 00:05:27.151 compress/mlx5: not in enabled drivers build config 00:05:27.151 compress/nitrox: not in enabled drivers build config 00:05:27.151 compress/octeontx: not in enabled drivers build config 00:05:27.151 compress/zlib: not in enabled drivers build config 00:05:27.151 regex/*: missing internal dependency, "regexdev" 00:05:27.151 ml/*: missing internal dependency, "mldev" 00:05:27.151 vdpa/ifc: not in enabled drivers build config 00:05:27.151 vdpa/mlx5: not in enabled drivers build config 00:05:27.151 vdpa/nfp: not in enabled drivers build config 00:05:27.151 vdpa/sfc: not in enabled 
drivers build config 00:05:27.151 event/*: missing internal dependency, "eventdev" 00:05:27.151 baseband/*: missing internal dependency, "bbdev" 00:05:27.151 gpu/*: missing internal dependency, "gpudev" 00:05:27.151 00:05:27.151 00:05:27.151 Build targets in project: 85 00:05:27.151 00:05:27.151 DPDK 24.03.0 00:05:27.151 00:05:27.151 User defined options 00:05:27.151 buildtype : debug 00:05:27.151 default_library : shared 00:05:27.151 libdir : lib 00:05:27.151 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:27.151 b_sanitize : address 00:05:27.151 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:27.151 c_link_args : 00:05:27.151 cpu_instruction_set: native 00:05:27.151 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:27.151 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:27.151 enable_docs : false 00:05:27.151 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:27.151 enable_kmods : false 00:05:27.151 max_lcores : 128 00:05:27.151 tests : false 00:05:27.151 00:05:27.151 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:27.151 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:27.151 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:27.151 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:27.151 [3/268] Linking static target lib/librte_kvargs.a 00:05:27.151 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:05:27.151 [5/268] Linking static target lib/librte_log.a 00:05:27.151 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:27.151 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.151 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:27.151 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:27.151 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:27.151 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:27.151 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:27.151 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:27.151 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:27.151 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:27.151 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.151 [17/268] Linking static target lib/librte_telemetry.a 00:05:27.151 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:27.408 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:27.408 [20/268] Linking target lib/librte_log.so.24.1 00:05:27.665 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:27.665 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:27.665 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:27.923 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:27.923 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:27.923 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 
00:05:27.923 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:28.180 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:28.180 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:28.180 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:28.180 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:28.180 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.438 [33/268] Linking target lib/librte_telemetry.so.24.1 00:05:28.438 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:28.736 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:28.736 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:28.736 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:28.736 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:28.736 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:28.994 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:28.994 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:28.994 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:28.994 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:29.253 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:29.253 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:29.253 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:29.512 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:29.770 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:29.770 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:29.770 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:30.028 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:30.028 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:30.028 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:30.286 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:30.286 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:30.286 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:30.286 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:30.544 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:30.544 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:30.544 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:30.544 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:30.803 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:30.803 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:30.803 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:31.062 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:31.062 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:31.062 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:31.062 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:31.320 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:31.578 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:31.578 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:31.578 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:31.835 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:31.835 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:31.835 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:31.835 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:31.835 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:31.835 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:32.092 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:32.092 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:32.092 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:32.349 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:32.349 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:32.607 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:32.865 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:32.865 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:32.865 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:32.865 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:32.865 [89/268] Linking static target lib/librte_ring.a 00:05:32.865 [90/268] Linking static target lib/librte_eal.a 00:05:32.865 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:32.865 [92/268] Linking static target lib/librte_mempool.a 00:05:33.123 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:33.123 [94/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:33.381 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:33.381 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:33.640 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.640 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:33.898 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:33.898 [100/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:33.898 [101/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:33.898 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:33.898 [103/268] Linking static target lib/librte_rcu.a 00:05:34.157 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:34.157 [105/268] Linking static target lib/librte_mbuf.a 00:05:34.157 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:34.415 [107/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.415 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:34.415 [109/268] Linking static target lib/librte_meter.a 00:05:34.674 [110/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.674 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:34.674 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:34.932 [113/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:34.932 [114/268] Linking static target lib/librte_net.a 00:05:34.932 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.191 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:35.191 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to 
capture output) 00:05:35.191 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:35.449 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.449 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:35.707 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:35.707 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:36.308 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:36.308 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:36.308 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:36.308 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:36.308 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:36.308 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:36.308 [129/268] Linking static target lib/librte_pci.a 00:05:36.566 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:36.566 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:36.566 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:36.566 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:36.566 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:36.824 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:36.824 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:36.824 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.824 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:36.824 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 
00:05:36.824 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:37.083 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:37.083 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:37.083 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:37.083 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:37.083 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:37.649 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:37.649 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:37.649 [148/268] Linking static target lib/librte_cmdline.a 00:05:37.649 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:37.649 [150/268] Linking static target lib/librte_timer.a 00:05:37.907 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:37.907 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:38.164 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:38.422 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:38.422 [155/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.680 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:38.938 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:38.938 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:38.938 [159/268] Linking static target lib/librte_compressdev.a 00:05:38.938 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:39.197 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:39.197 [162/268] Linking static target 
lib/librte_ethdev.a 00:05:39.456 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:39.714 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.714 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:39.972 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:39.972 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:39.972 [168/268] Linking static target lib/librte_hash.a 00:05:39.972 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:39.972 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:40.230 [171/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.230 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:40.230 [173/268] Linking static target lib/librte_dmadev.a 00:05:40.488 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:40.488 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:40.746 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:40.746 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:40.746 [178/268] Linking static target lib/librte_cryptodev.a 00:05:40.746 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:40.746 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:41.004 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:41.004 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.004 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:41.262 [184/268] Generating lib/hash.sym_chk with a custom command 
(wrapped by meson to capture output) 00:05:41.904 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:41.904 [186/268] Linking static target lib/librte_power.a 00:05:41.904 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:41.904 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:41.904 [189/268] Linking static target lib/librte_reorder.a 00:05:41.904 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:41.905 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:41.905 [192/268] Linking static target lib/librte_security.a 00:05:42.193 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:42.451 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.710 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:42.967 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.225 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.225 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:43.483 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:43.483 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:43.483 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:43.483 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.741 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:43.741 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:44.309 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:44.309 [206/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:44.309 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:44.309 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:44.309 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:44.567 [210/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:44.567 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:44.567 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:44.567 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:44.567 [214/268] Linking static target drivers/librte_bus_vdev.a 00:05:44.567 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:44.826 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:44.826 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:44.826 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:44.826 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:44.826 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:44.826 [221/268] Linking static target drivers/librte_bus_pci.a 00:05:45.085 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.085 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:45.085 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:45.085 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:45.085 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:45.652 [227/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.218 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.218 [229/268] Linking target lib/librte_eal.so.24.1 00:05:46.476 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:46.476 [231/268] Linking target lib/librte_pci.so.24.1 00:05:46.476 [232/268] Linking target lib/librte_meter.so.24.1 00:05:46.476 [233/268] Linking target lib/librte_ring.so.24.1 00:05:46.476 [234/268] Linking target lib/librte_timer.so.24.1 00:05:46.476 [235/268] Linking target lib/librte_dmadev.so.24.1 00:05:46.476 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:46.737 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:46.737 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:46.737 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:46.737 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:46.737 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:46.737 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:46.737 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:46.737 [244/268] Linking target lib/librte_rcu.so.24.1 00:05:46.737 [245/268] Linking target lib/librte_mempool.so.24.1 00:05:47.075 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:47.075 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:47.075 [248/268] Linking target lib/librte_mbuf.so.24.1 00:05:47.075 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:47.075 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:47.075 [251/268] 
Linking target lib/librte_compressdev.so.24.1 00:05:47.334 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:47.334 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:05:47.334 [254/268] Linking target lib/librte_net.so.24.1 00:05:47.334 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:47.334 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:47.334 [257/268] Linking target lib/librte_security.so.24.1 00:05:47.334 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:47.334 [259/268] Linking target lib/librte_hash.so.24.1 00:05:47.592 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:48.158 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.158 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:48.417 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:48.674 [264/268] Linking target lib/librte_power.so.24.1 00:05:52.860 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:52.860 [266/268] Linking static target lib/librte_vhost.a 00:05:54.321 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.321 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:54.321 INFO: autodetecting backend as ninja 00:05:54.321 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:16.248 CC lib/log/log.o 00:06:16.248 CC lib/log/log_flags.o 00:06:16.248 CC lib/ut_mock/mock.o 00:06:16.248 CC lib/log/log_deprecated.o 00:06:16.248 CC lib/ut/ut.o 00:06:16.248 LIB libspdk_log.a 00:06:16.248 LIB libspdk_ut_mock.a 00:06:16.248 LIB libspdk_ut.a 00:06:16.248 SO libspdk_ut_mock.so.6.0 00:06:16.248 SO libspdk_log.so.7.1 00:06:16.248 SO libspdk_ut.so.2.0 00:06:16.248 SYMLINK libspdk_ut_mock.so 
00:06:16.248 SYMLINK libspdk_log.so 00:06:16.248 SYMLINK libspdk_ut.so 00:06:16.248 CC lib/ioat/ioat.o 00:06:16.248 CC lib/util/base64.o 00:06:16.248 CC lib/util/bit_array.o 00:06:16.248 CC lib/util/cpuset.o 00:06:16.248 CC lib/dma/dma.o 00:06:16.248 CC lib/util/crc16.o 00:06:16.248 CC lib/util/crc32c.o 00:06:16.248 CC lib/util/crc32.o 00:06:16.248 CXX lib/trace_parser/trace.o 00:06:16.248 CC lib/vfio_user/host/vfio_user_pci.o 00:06:16.248 CC lib/util/crc32_ieee.o 00:06:16.248 CC lib/util/crc64.o 00:06:16.248 CC lib/util/dif.o 00:06:16.248 CC lib/util/fd.o 00:06:16.248 CC lib/util/fd_group.o 00:06:16.248 LIB libspdk_dma.a 00:06:16.248 SO libspdk_dma.so.5.0 00:06:16.248 CC lib/vfio_user/host/vfio_user.o 00:06:16.248 SYMLINK libspdk_dma.so 00:06:16.248 CC lib/util/file.o 00:06:16.248 CC lib/util/hexlify.o 00:06:16.248 CC lib/util/iov.o 00:06:16.248 CC lib/util/math.o 00:06:16.248 LIB libspdk_ioat.a 00:06:16.248 CC lib/util/net.o 00:06:16.248 SO libspdk_ioat.so.7.0 00:06:16.248 CC lib/util/pipe.o 00:06:16.248 CC lib/util/strerror_tls.o 00:06:16.248 SYMLINK libspdk_ioat.so 00:06:16.248 CC lib/util/string.o 00:06:16.248 CC lib/util/uuid.o 00:06:16.248 LIB libspdk_vfio_user.a 00:06:16.248 SO libspdk_vfio_user.so.5.0 00:06:16.248 CC lib/util/xor.o 00:06:16.248 CC lib/util/zipf.o 00:06:16.248 CC lib/util/md5.o 00:06:16.248 SYMLINK libspdk_vfio_user.so 00:06:16.248 LIB libspdk_util.a 00:06:16.248 LIB libspdk_trace_parser.a 00:06:16.248 SO libspdk_util.so.10.1 00:06:16.248 SO libspdk_trace_parser.so.6.0 00:06:16.248 SYMLINK libspdk_trace_parser.so 00:06:16.248 SYMLINK libspdk_util.so 00:06:16.248 CC lib/rdma_utils/rdma_utils.o 00:06:16.248 CC lib/vmd/vmd.o 00:06:16.248 CC lib/vmd/led.o 00:06:16.248 CC lib/env_dpdk/memory.o 00:06:16.248 CC lib/env_dpdk/env.o 00:06:16.248 CC lib/env_dpdk/init.o 00:06:16.248 CC lib/env_dpdk/pci.o 00:06:16.248 CC lib/idxd/idxd.o 00:06:16.248 CC lib/conf/conf.o 00:06:16.248 CC lib/json/json_parse.o 00:06:16.248 CC lib/env_dpdk/threads.o 
00:06:16.248 LIB libspdk_conf.a 00:06:16.248 CC lib/json/json_util.o 00:06:16.248 SO libspdk_conf.so.6.0 00:06:16.248 CC lib/env_dpdk/pci_ioat.o 00:06:16.248 SYMLINK libspdk_conf.so 00:06:16.248 CC lib/env_dpdk/pci_virtio.o 00:06:16.248 LIB libspdk_rdma_utils.a 00:06:16.248 SO libspdk_rdma_utils.so.1.0 00:06:16.506 CC lib/env_dpdk/pci_vmd.o 00:06:16.506 CC lib/env_dpdk/pci_idxd.o 00:06:16.506 CC lib/idxd/idxd_user.o 00:06:16.506 SYMLINK libspdk_rdma_utils.so 00:06:16.506 CC lib/idxd/idxd_kernel.o 00:06:16.506 CC lib/env_dpdk/pci_event.o 00:06:16.506 CC lib/env_dpdk/sigbus_handler.o 00:06:16.506 CC lib/json/json_write.o 00:06:16.506 CC lib/env_dpdk/pci_dpdk.o 00:06:16.506 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:16.764 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:16.764 LIB libspdk_idxd.a 00:06:16.764 SO libspdk_idxd.so.12.1 00:06:16.764 LIB libspdk_vmd.a 00:06:17.025 CC lib/rdma_provider/common.o 00:06:17.025 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:17.025 LIB libspdk_json.a 00:06:17.025 SYMLINK libspdk_idxd.so 00:06:17.025 SO libspdk_vmd.so.6.0 00:06:17.025 SO libspdk_json.so.6.0 00:06:17.025 SYMLINK libspdk_vmd.so 00:06:17.025 SYMLINK libspdk_json.so 00:06:17.282 LIB libspdk_rdma_provider.a 00:06:17.282 SO libspdk_rdma_provider.so.7.0 00:06:17.282 SYMLINK libspdk_rdma_provider.so 00:06:17.282 CC lib/jsonrpc/jsonrpc_server.o 00:06:17.282 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:17.282 CC lib/jsonrpc/jsonrpc_client.o 00:06:17.282 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:17.539 LIB libspdk_jsonrpc.a 00:06:17.797 SO libspdk_jsonrpc.so.6.0 00:06:17.797 SYMLINK libspdk_jsonrpc.so 00:06:17.797 LIB libspdk_env_dpdk.a 00:06:18.054 SO libspdk_env_dpdk.so.15.1 00:06:18.054 CC lib/rpc/rpc.o 00:06:18.311 SYMLINK libspdk_env_dpdk.so 00:06:18.311 LIB libspdk_rpc.a 00:06:18.311 SO libspdk_rpc.so.6.0 00:06:18.569 SYMLINK libspdk_rpc.so 00:06:18.829 CC lib/keyring/keyring.o 00:06:18.829 CC lib/keyring/keyring_rpc.o 00:06:18.829 CC lib/notify/notify.o 00:06:18.829 CC 
lib/notify/notify_rpc.o 00:06:18.829 CC lib/trace/trace.o 00:06:18.829 CC lib/trace/trace_rpc.o 00:06:18.829 CC lib/trace/trace_flags.o 00:06:18.829 LIB libspdk_notify.a 00:06:19.087 SO libspdk_notify.so.6.0 00:06:19.087 SYMLINK libspdk_notify.so 00:06:19.087 LIB libspdk_keyring.a 00:06:19.087 LIB libspdk_trace.a 00:06:19.087 SO libspdk_keyring.so.2.0 00:06:19.087 SO libspdk_trace.so.11.0 00:06:19.087 SYMLINK libspdk_keyring.so 00:06:19.411 SYMLINK libspdk_trace.so 00:06:19.411 CC lib/thread/thread.o 00:06:19.411 CC lib/thread/iobuf.o 00:06:19.411 CC lib/sock/sock.o 00:06:19.411 CC lib/sock/sock_rpc.o 00:06:19.992 LIB libspdk_sock.a 00:06:20.250 SO libspdk_sock.so.10.0 00:06:20.250 SYMLINK libspdk_sock.so 00:06:20.509 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:20.509 CC lib/nvme/nvme_ctrlr.o 00:06:20.509 CC lib/nvme/nvme_fabric.o 00:06:20.509 CC lib/nvme/nvme_ns.o 00:06:20.509 CC lib/nvme/nvme_pcie.o 00:06:20.509 CC lib/nvme/nvme_pcie_common.o 00:06:20.509 CC lib/nvme/nvme_ns_cmd.o 00:06:20.509 CC lib/nvme/nvme.o 00:06:20.509 CC lib/nvme/nvme_qpair.o 00:06:21.445 CC lib/nvme/nvme_quirks.o 00:06:21.445 CC lib/nvme/nvme_transport.o 00:06:21.445 CC lib/nvme/nvme_discovery.o 00:06:21.445 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:21.445 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:21.704 LIB libspdk_thread.a 00:06:21.704 SO libspdk_thread.so.11.0 00:06:21.704 CC lib/nvme/nvme_tcp.o 00:06:21.704 CC lib/nvme/nvme_opal.o 00:06:21.704 SYMLINK libspdk_thread.so 00:06:21.704 CC lib/nvme/nvme_io_msg.o 00:06:21.962 CC lib/nvme/nvme_poll_group.o 00:06:21.962 CC lib/nvme/nvme_zns.o 00:06:22.220 CC lib/nvme/nvme_stubs.o 00:06:22.220 CC lib/nvme/nvme_auth.o 00:06:22.220 CC lib/nvme/nvme_cuse.o 00:06:22.479 CC lib/accel/accel.o 00:06:22.479 CC lib/nvme/nvme_rdma.o 00:06:22.479 CC lib/accel/accel_rpc.o 00:06:22.737 CC lib/accel/accel_sw.o 00:06:22.997 CC lib/blob/blobstore.o 00:06:22.997 CC lib/init/json_config.o 00:06:22.997 CC lib/virtio/virtio.o 00:06:23.256 CC lib/fsdev/fsdev.o 00:06:23.256 CC 
lib/init/subsystem.o 00:06:23.514 CC lib/fsdev/fsdev_io.o 00:06:23.514 CC lib/init/subsystem_rpc.o 00:06:23.514 CC lib/virtio/virtio_vhost_user.o 00:06:23.514 CC lib/virtio/virtio_vfio_user.o 00:06:23.514 CC lib/virtio/virtio_pci.o 00:06:23.514 CC lib/init/rpc.o 00:06:23.514 CC lib/fsdev/fsdev_rpc.o 00:06:23.772 CC lib/blob/request.o 00:06:23.772 CC lib/blob/zeroes.o 00:06:23.772 LIB libspdk_init.a 00:06:23.772 LIB libspdk_accel.a 00:06:23.772 SO libspdk_init.so.6.0 00:06:23.772 CC lib/blob/blob_bs_dev.o 00:06:23.772 SO libspdk_accel.so.16.0 00:06:23.772 LIB libspdk_virtio.a 00:06:24.030 SYMLINK libspdk_init.so 00:06:24.030 SYMLINK libspdk_accel.so 00:06:24.030 SO libspdk_virtio.so.7.0 00:06:24.030 SYMLINK libspdk_virtio.so 00:06:24.030 LIB libspdk_fsdev.a 00:06:24.030 CC lib/event/app.o 00:06:24.030 CC lib/event/app_rpc.o 00:06:24.030 CC lib/event/log_rpc.o 00:06:24.031 CC lib/event/reactor.o 00:06:24.031 CC lib/event/scheduler_static.o 00:06:24.031 CC lib/bdev/bdev.o 00:06:24.031 SO libspdk_fsdev.so.2.0 00:06:24.289 CC lib/bdev/bdev_rpc.o 00:06:24.289 SYMLINK libspdk_fsdev.so 00:06:24.289 CC lib/bdev/bdev_zone.o 00:06:24.289 CC lib/bdev/part.o 00:06:24.289 LIB libspdk_nvme.a 00:06:24.289 CC lib/bdev/scsi_nvme.o 00:06:24.548 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:24.548 SO libspdk_nvme.so.15.0 00:06:24.808 LIB libspdk_event.a 00:06:24.808 SO libspdk_event.so.14.0 00:06:24.808 SYMLINK libspdk_event.so 00:06:24.808 SYMLINK libspdk_nvme.so 00:06:25.375 LIB libspdk_fuse_dispatcher.a 00:06:25.375 SO libspdk_fuse_dispatcher.so.1.0 00:06:25.375 SYMLINK libspdk_fuse_dispatcher.so 00:06:27.278 LIB libspdk_blob.a 00:06:27.536 SO libspdk_blob.so.12.0 00:06:27.536 SYMLINK libspdk_blob.so 00:06:27.794 CC lib/lvol/lvol.o 00:06:27.794 LIB libspdk_bdev.a 00:06:27.794 CC lib/blobfs/tree.o 00:06:27.794 CC lib/blobfs/blobfs.o 00:06:28.053 SO libspdk_bdev.so.17.0 00:06:28.053 SYMLINK libspdk_bdev.so 00:06:28.311 CC lib/nvmf/ctrlr.o 00:06:28.311 CC lib/nvmf/ctrlr_discovery.o 
00:06:28.311 CC lib/nvmf/ctrlr_bdev.o 00:06:28.311 CC lib/nvmf/subsystem.o 00:06:28.311 CC lib/scsi/dev.o 00:06:28.311 CC lib/ublk/ublk.o 00:06:28.311 CC lib/nbd/nbd.o 00:06:28.311 CC lib/ftl/ftl_core.o 00:06:28.569 CC lib/scsi/lun.o 00:06:28.843 CC lib/nbd/nbd_rpc.o 00:06:28.843 CC lib/ftl/ftl_init.o 00:06:28.843 CC lib/ublk/ublk_rpc.o 00:06:29.146 CC lib/scsi/port.o 00:06:29.146 LIB libspdk_blobfs.a 00:06:29.146 LIB libspdk_nbd.a 00:06:29.146 SO libspdk_nbd.so.7.0 00:06:29.146 SO libspdk_blobfs.so.11.0 00:06:29.146 CC lib/ftl/ftl_layout.o 00:06:29.146 SYMLINK libspdk_nbd.so 00:06:29.146 SYMLINK libspdk_blobfs.so 00:06:29.146 CC lib/ftl/ftl_debug.o 00:06:29.146 CC lib/scsi/scsi.o 00:06:29.146 CC lib/scsi/scsi_bdev.o 00:06:29.146 LIB libspdk_ublk.a 00:06:29.146 CC lib/ftl/ftl_io.o 00:06:29.146 SO libspdk_ublk.so.3.0 00:06:29.146 LIB libspdk_lvol.a 00:06:29.146 SO libspdk_lvol.so.11.0 00:06:29.146 CC lib/nvmf/nvmf.o 00:06:29.407 SYMLINK libspdk_ublk.so 00:06:29.407 CC lib/nvmf/nvmf_rpc.o 00:06:29.407 CC lib/nvmf/transport.o 00:06:29.407 SYMLINK libspdk_lvol.so 00:06:29.407 CC lib/scsi/scsi_pr.o 00:06:29.407 CC lib/scsi/scsi_rpc.o 00:06:29.407 CC lib/ftl/ftl_sb.o 00:06:29.407 CC lib/ftl/ftl_l2p.o 00:06:29.665 CC lib/scsi/task.o 00:06:29.665 CC lib/ftl/ftl_l2p_flat.o 00:06:29.665 CC lib/ftl/ftl_nv_cache.o 00:06:29.665 CC lib/ftl/ftl_band.o 00:06:29.923 CC lib/ftl/ftl_band_ops.o 00:06:29.923 LIB libspdk_scsi.a 00:06:29.923 SO libspdk_scsi.so.9.0 00:06:29.923 CC lib/ftl/ftl_writer.o 00:06:29.923 CC lib/ftl/ftl_rq.o 00:06:29.923 SYMLINK libspdk_scsi.so 00:06:29.923 CC lib/nvmf/tcp.o 00:06:30.181 CC lib/ftl/ftl_reloc.o 00:06:30.181 CC lib/ftl/ftl_l2p_cache.o 00:06:30.181 CC lib/nvmf/stubs.o 00:06:30.181 CC lib/ftl/ftl_p2l.o 00:06:30.181 CC lib/ftl/ftl_p2l_log.o 00:06:30.439 CC lib/nvmf/mdns_server.o 00:06:30.439 CC lib/ftl/mngt/ftl_mngt.o 00:06:30.698 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:30.698 CC lib/nvmf/rdma.o 00:06:30.698 CC lib/nvmf/auth.o 00:06:30.955 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:30.955 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:30.955 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:30.955 CC lib/iscsi/conn.o 00:06:30.955 CC lib/vhost/vhost.o 00:06:30.955 CC lib/vhost/vhost_rpc.o 00:06:30.955 CC lib/vhost/vhost_scsi.o 00:06:31.212 CC lib/iscsi/init_grp.o 00:06:31.212 CC lib/iscsi/iscsi.o 00:06:31.212 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:31.470 CC lib/iscsi/param.o 00:06:31.726 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:31.726 CC lib/iscsi/portal_grp.o 00:06:31.726 CC lib/iscsi/tgt_node.o 00:06:31.726 CC lib/iscsi/iscsi_subsystem.o 00:06:31.983 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:31.983 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:31.983 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:31.983 CC lib/vhost/vhost_blk.o 00:06:31.983 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:32.241 CC lib/vhost/rte_vhost_user.o 00:06:32.241 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:32.241 CC lib/iscsi/iscsi_rpc.o 00:06:32.241 CC lib/iscsi/task.o 00:06:32.241 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:32.241 CC lib/ftl/utils/ftl_conf.o 00:06:32.498 CC lib/ftl/utils/ftl_md.o 00:06:32.498 CC lib/ftl/utils/ftl_mempool.o 00:06:32.498 CC lib/ftl/utils/ftl_bitmap.o 00:06:32.498 CC lib/ftl/utils/ftl_property.o 00:06:32.874 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:32.874 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:32.874 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:32.874 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:32.874 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:32.874 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:33.156 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:33.157 LIB libspdk_iscsi.a 00:06:33.157 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:33.157 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:33.157 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:33.157 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:33.157 SO libspdk_iscsi.so.8.0 00:06:33.157 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:33.157 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:33.157 CC lib/ftl/base/ftl_base_dev.o 00:06:33.157 CC 
lib/ftl/base/ftl_base_bdev.o 00:06:33.415 CC lib/ftl/ftl_trace.o 00:06:33.415 SYMLINK libspdk_iscsi.so 00:06:33.415 LIB libspdk_vhost.a 00:06:33.674 SO libspdk_vhost.so.8.0 00:06:33.674 LIB libspdk_ftl.a 00:06:33.674 LIB libspdk_nvmf.a 00:06:33.674 SYMLINK libspdk_vhost.so 00:06:33.934 SO libspdk_nvmf.so.20.0 00:06:33.934 SO libspdk_ftl.so.9.0 00:06:34.193 SYMLINK libspdk_nvmf.so 00:06:34.193 SYMLINK libspdk_ftl.so 00:06:34.761 CC module/env_dpdk/env_dpdk_rpc.o 00:06:34.761 CC module/sock/posix/posix.o 00:06:34.761 CC module/accel/ioat/accel_ioat.o 00:06:34.761 CC module/keyring/linux/keyring.o 00:06:34.761 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:34.761 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:34.761 CC module/keyring/file/keyring.o 00:06:34.761 CC module/accel/error/accel_error.o 00:06:34.761 CC module/blob/bdev/blob_bdev.o 00:06:34.761 CC module/fsdev/aio/fsdev_aio.o 00:06:34.761 LIB libspdk_env_dpdk_rpc.a 00:06:34.761 SO libspdk_env_dpdk_rpc.so.6.0 00:06:35.020 SYMLINK libspdk_env_dpdk_rpc.so 00:06:35.020 CC module/keyring/linux/keyring_rpc.o 00:06:35.020 CC module/keyring/file/keyring_rpc.o 00:06:35.020 CC module/accel/ioat/accel_ioat_rpc.o 00:06:35.020 LIB libspdk_scheduler_dpdk_governor.a 00:06:35.020 LIB libspdk_scheduler_dynamic.a 00:06:35.020 CC module/accel/error/accel_error_rpc.o 00:06:35.020 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:35.020 SO libspdk_scheduler_dynamic.so.4.0 00:06:35.020 CC module/scheduler/gscheduler/gscheduler.o 00:06:35.020 LIB libspdk_blob_bdev.a 00:06:35.020 LIB libspdk_keyring_linux.a 00:06:35.278 LIB libspdk_accel_ioat.a 00:06:35.278 SO libspdk_keyring_linux.so.1.0 00:06:35.278 SYMLINK libspdk_scheduler_dynamic.so 00:06:35.278 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:35.278 LIB libspdk_keyring_file.a 00:06:35.278 SO libspdk_blob_bdev.so.12.0 00:06:35.278 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:35.278 CC module/fsdev/aio/linux_aio_mgr.o 00:06:35.278 SO libspdk_accel_ioat.so.6.0 
00:06:35.278 SO libspdk_keyring_file.so.2.0 00:06:35.278 LIB libspdk_accel_error.a 00:06:35.278 SYMLINK libspdk_keyring_linux.so 00:06:35.278 SYMLINK libspdk_blob_bdev.so 00:06:35.278 SYMLINK libspdk_accel_ioat.so 00:06:35.278 SO libspdk_accel_error.so.2.0 00:06:35.278 LIB libspdk_scheduler_gscheduler.a 00:06:35.278 SYMLINK libspdk_keyring_file.so 00:06:35.278 SO libspdk_scheduler_gscheduler.so.4.0 00:06:35.278 SYMLINK libspdk_accel_error.so 00:06:35.536 SYMLINK libspdk_scheduler_gscheduler.so 00:06:35.536 CC module/accel/dsa/accel_dsa.o 00:06:35.536 CC module/accel/iaa/accel_iaa.o 00:06:35.536 CC module/bdev/delay/vbdev_delay.o 00:06:35.536 CC module/bdev/gpt/gpt.o 00:06:35.536 CC module/bdev/lvol/vbdev_lvol.o 00:06:35.536 CC module/bdev/error/vbdev_error.o 00:06:35.536 CC module/bdev/malloc/bdev_malloc.o 00:06:35.536 CC module/blobfs/bdev/blobfs_bdev.o 00:06:35.795 LIB libspdk_fsdev_aio.a 00:06:35.795 LIB libspdk_sock_posix.a 00:06:35.795 SO libspdk_fsdev_aio.so.1.0 00:06:35.795 CC module/accel/iaa/accel_iaa_rpc.o 00:06:35.795 SO libspdk_sock_posix.so.6.0 00:06:35.795 CC module/bdev/gpt/vbdev_gpt.o 00:06:35.795 CC module/accel/dsa/accel_dsa_rpc.o 00:06:35.795 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:35.795 SYMLINK libspdk_fsdev_aio.so 00:06:35.795 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:35.795 SYMLINK libspdk_sock_posix.so 00:06:35.795 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:36.053 CC module/bdev/error/vbdev_error_rpc.o 00:06:36.053 LIB libspdk_accel_iaa.a 00:06:36.053 LIB libspdk_accel_dsa.a 00:06:36.053 SO libspdk_accel_iaa.so.3.0 00:06:36.053 SO libspdk_accel_dsa.so.5.0 00:06:36.053 LIB libspdk_blobfs_bdev.a 00:06:36.053 SO libspdk_blobfs_bdev.so.6.0 00:06:36.053 SYMLINK libspdk_accel_iaa.so 00:06:36.053 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:36.053 LIB libspdk_bdev_delay.a 00:06:36.053 SYMLINK libspdk_accel_dsa.so 00:06:36.053 LIB libspdk_bdev_error.a 00:06:36.053 SO libspdk_bdev_delay.so.6.0 00:06:36.053 SYMLINK libspdk_blobfs_bdev.so 
00:06:36.053 LIB libspdk_bdev_gpt.a 00:06:36.053 SO libspdk_bdev_error.so.6.0 00:06:36.312 SO libspdk_bdev_gpt.so.6.0 00:06:36.312 SYMLINK libspdk_bdev_delay.so 00:06:36.312 SYMLINK libspdk_bdev_error.so 00:06:36.312 CC module/bdev/null/bdev_null.o 00:06:36.312 SYMLINK libspdk_bdev_gpt.so 00:06:36.312 LIB libspdk_bdev_malloc.a 00:06:36.312 CC module/bdev/passthru/vbdev_passthru.o 00:06:36.312 CC module/bdev/nvme/bdev_nvme.o 00:06:36.312 SO libspdk_bdev_malloc.so.6.0 00:06:36.312 CC module/bdev/raid/bdev_raid.o 00:06:36.312 LIB libspdk_bdev_lvol.a 00:06:36.312 SYMLINK libspdk_bdev_malloc.so 00:06:36.312 CC module/bdev/split/vbdev_split.o 00:06:36.570 SO libspdk_bdev_lvol.so.6.0 00:06:36.570 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:36.570 CC module/bdev/raid/bdev_raid_rpc.o 00:06:36.570 CC module/bdev/aio/bdev_aio.o 00:06:36.570 CC module/bdev/ftl/bdev_ftl.o 00:06:36.570 SYMLINK libspdk_bdev_lvol.so 00:06:36.570 CC module/bdev/raid/bdev_raid_sb.o 00:06:36.570 CC module/bdev/null/bdev_null_rpc.o 00:06:36.570 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:36.828 CC module/bdev/split/vbdev_split_rpc.o 00:06:36.828 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:36.828 LIB libspdk_bdev_null.a 00:06:36.828 SO libspdk_bdev_null.so.6.0 00:06:36.828 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:36.828 CC module/bdev/nvme/nvme_rpc.o 00:06:36.828 LIB libspdk_bdev_passthru.a 00:06:36.828 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:36.828 LIB libspdk_bdev_split.a 00:06:36.828 CC module/bdev/aio/bdev_aio_rpc.o 00:06:36.828 SO libspdk_bdev_passthru.so.6.0 00:06:36.828 SYMLINK libspdk_bdev_null.so 00:06:36.828 SO libspdk_bdev_split.so.6.0 00:06:37.086 SYMLINK libspdk_bdev_passthru.so 00:06:37.086 SYMLINK libspdk_bdev_split.so 00:06:37.086 CC module/bdev/raid/raid0.o 00:06:37.086 CC module/bdev/raid/raid1.o 00:06:37.086 LIB libspdk_bdev_zone_block.a 00:06:37.086 LIB libspdk_bdev_aio.a 00:06:37.086 LIB libspdk_bdev_ftl.a 00:06:37.086 SO libspdk_bdev_zone_block.so.6.0 
00:06:37.086 SO libspdk_bdev_aio.so.6.0 00:06:37.086 CC module/bdev/iscsi/bdev_iscsi.o 00:06:37.086 CC module/bdev/nvme/bdev_mdns_client.o 00:06:37.086 SO libspdk_bdev_ftl.so.6.0 00:06:37.344 SYMLINK libspdk_bdev_zone_block.so 00:06:37.344 SYMLINK libspdk_bdev_aio.so 00:06:37.344 CC module/bdev/nvme/vbdev_opal.o 00:06:37.344 SYMLINK libspdk_bdev_ftl.so 00:06:37.344 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:37.344 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:37.344 CC module/bdev/raid/concat.o 00:06:37.344 CC module/bdev/raid/raid5f.o 00:06:37.344 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:37.603 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:37.603 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:37.603 LIB libspdk_bdev_iscsi.a 00:06:37.603 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:37.603 SO libspdk_bdev_iscsi.so.6.0 00:06:37.603 SYMLINK libspdk_bdev_iscsi.so 00:06:38.169 LIB libspdk_bdev_raid.a 00:06:38.169 LIB libspdk_bdev_virtio.a 00:06:38.169 SO libspdk_bdev_raid.so.6.0 00:06:38.169 SO libspdk_bdev_virtio.so.6.0 00:06:38.169 SYMLINK libspdk_bdev_raid.so 00:06:38.169 SYMLINK libspdk_bdev_virtio.so 00:06:40.072 LIB libspdk_bdev_nvme.a 00:06:40.072 SO libspdk_bdev_nvme.so.7.1 00:06:40.330 SYMLINK libspdk_bdev_nvme.so 00:06:40.897 CC module/event/subsystems/iobuf/iobuf.o 00:06:40.897 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:40.897 CC module/event/subsystems/vmd/vmd.o 00:06:40.897 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:40.897 CC module/event/subsystems/keyring/keyring.o 00:06:40.897 CC module/event/subsystems/scheduler/scheduler.o 00:06:40.897 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:40.897 CC module/event/subsystems/sock/sock.o 00:06:40.897 CC module/event/subsystems/fsdev/fsdev.o 00:06:40.897 LIB libspdk_event_vhost_blk.a 00:06:40.897 LIB libspdk_event_fsdev.a 00:06:40.897 LIB libspdk_event_scheduler.a 00:06:40.897 LIB libspdk_event_sock.a 00:06:40.897 LIB libspdk_event_iobuf.a 00:06:40.897 LIB libspdk_event_vmd.a 00:06:40.897 
LIB libspdk_event_keyring.a 00:06:40.897 SO libspdk_event_vhost_blk.so.3.0 00:06:40.897 SO libspdk_event_fsdev.so.1.0 00:06:40.897 SO libspdk_event_scheduler.so.4.0 00:06:40.897 SO libspdk_event_sock.so.5.0 00:06:40.897 SO libspdk_event_iobuf.so.3.0 00:06:40.897 SO libspdk_event_vmd.so.6.0 00:06:40.897 SO libspdk_event_keyring.so.1.0 00:06:40.897 SYMLINK libspdk_event_fsdev.so 00:06:41.155 SYMLINK libspdk_event_scheduler.so 00:06:41.155 SYMLINK libspdk_event_vhost_blk.so 00:06:41.155 SYMLINK libspdk_event_sock.so 00:06:41.155 SYMLINK libspdk_event_vmd.so 00:06:41.155 SYMLINK libspdk_event_keyring.so 00:06:41.155 SYMLINK libspdk_event_iobuf.so 00:06:41.413 CC module/event/subsystems/accel/accel.o 00:06:41.413 LIB libspdk_event_accel.a 00:06:41.672 SO libspdk_event_accel.so.6.0 00:06:41.672 SYMLINK libspdk_event_accel.so 00:06:41.930 CC module/event/subsystems/bdev/bdev.o 00:06:42.188 LIB libspdk_event_bdev.a 00:06:42.188 SO libspdk_event_bdev.so.6.0 00:06:42.188 SYMLINK libspdk_event_bdev.so 00:06:42.446 CC module/event/subsystems/scsi/scsi.o 00:06:42.446 CC module/event/subsystems/nbd/nbd.o 00:06:42.446 CC module/event/subsystems/ublk/ublk.o 00:06:42.446 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:42.446 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:42.705 LIB libspdk_event_ublk.a 00:06:42.705 LIB libspdk_event_nbd.a 00:06:42.705 LIB libspdk_event_scsi.a 00:06:42.705 SO libspdk_event_nbd.so.6.0 00:06:42.705 SO libspdk_event_ublk.so.3.0 00:06:42.705 SO libspdk_event_scsi.so.6.0 00:06:42.705 SYMLINK libspdk_event_nbd.so 00:06:42.705 SYMLINK libspdk_event_ublk.so 00:06:42.705 SYMLINK libspdk_event_scsi.so 00:06:42.705 LIB libspdk_event_nvmf.a 00:06:42.963 SO libspdk_event_nvmf.so.6.0 00:06:42.963 SYMLINK libspdk_event_nvmf.so 00:06:42.963 CC module/event/subsystems/iscsi/iscsi.o 00:06:42.963 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:43.221 LIB libspdk_event_vhost_scsi.a 00:06:43.221 LIB libspdk_event_iscsi.a 00:06:43.221 SO 
libspdk_event_vhost_scsi.so.3.0 00:06:43.221 SO libspdk_event_iscsi.so.6.0 00:06:43.478 SYMLINK libspdk_event_vhost_scsi.so 00:06:43.478 SYMLINK libspdk_event_iscsi.so 00:06:43.478 SO libspdk.so.6.0 00:06:43.478 SYMLINK libspdk.so 00:06:43.736 CC app/trace_record/trace_record.o 00:06:43.736 CXX app/trace/trace.o 00:06:43.736 CC app/spdk_lspci/spdk_lspci.o 00:06:43.736 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:43.736 CC app/nvmf_tgt/nvmf_main.o 00:06:43.994 CC app/iscsi_tgt/iscsi_tgt.o 00:06:43.994 CC app/spdk_tgt/spdk_tgt.o 00:06:43.994 CC examples/util/zipf/zipf.o 00:06:43.994 CC examples/ioat/perf/perf.o 00:06:43.994 CC test/thread/poller_perf/poller_perf.o 00:06:43.994 LINK spdk_lspci 00:06:43.994 LINK interrupt_tgt 00:06:43.994 LINK nvmf_tgt 00:06:44.252 LINK poller_perf 00:06:44.252 LINK zipf 00:06:44.252 LINK iscsi_tgt 00:06:44.252 LINK spdk_tgt 00:06:44.252 LINK spdk_trace_record 00:06:44.252 LINK ioat_perf 00:06:44.252 CC app/spdk_nvme_perf/perf.o 00:06:44.252 LINK spdk_trace 00:06:44.510 CC app/spdk_nvme_discover/discovery_aer.o 00:06:44.510 CC app/spdk_nvme_identify/identify.o 00:06:44.510 TEST_HEADER include/spdk/accel.h 00:06:44.510 TEST_HEADER include/spdk/accel_module.h 00:06:44.510 TEST_HEADER include/spdk/assert.h 00:06:44.510 TEST_HEADER include/spdk/barrier.h 00:06:44.510 TEST_HEADER include/spdk/base64.h 00:06:44.510 TEST_HEADER include/spdk/bdev.h 00:06:44.510 TEST_HEADER include/spdk/bdev_module.h 00:06:44.510 TEST_HEADER include/spdk/bdev_zone.h 00:06:44.510 TEST_HEADER include/spdk/bit_array.h 00:06:44.510 TEST_HEADER include/spdk/bit_pool.h 00:06:44.510 CC examples/ioat/verify/verify.o 00:06:44.510 TEST_HEADER include/spdk/blob_bdev.h 00:06:44.510 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:44.510 TEST_HEADER include/spdk/blobfs.h 00:06:44.510 TEST_HEADER include/spdk/blob.h 00:06:44.510 TEST_HEADER include/spdk/conf.h 00:06:44.510 TEST_HEADER include/spdk/config.h 00:06:44.510 TEST_HEADER include/spdk/cpuset.h 00:06:44.510 
TEST_HEADER include/spdk/crc16.h 00:06:44.510 TEST_HEADER include/spdk/crc32.h 00:06:44.510 TEST_HEADER include/spdk/crc64.h 00:06:44.510 TEST_HEADER include/spdk/dif.h 00:06:44.510 TEST_HEADER include/spdk/dma.h 00:06:44.510 TEST_HEADER include/spdk/endian.h 00:06:44.510 TEST_HEADER include/spdk/env_dpdk.h 00:06:44.510 TEST_HEADER include/spdk/env.h 00:06:44.510 TEST_HEADER include/spdk/event.h 00:06:44.510 TEST_HEADER include/spdk/fd_group.h 00:06:44.510 CC test/dma/test_dma/test_dma.o 00:06:44.510 TEST_HEADER include/spdk/fd.h 00:06:44.510 TEST_HEADER include/spdk/file.h 00:06:44.510 TEST_HEADER include/spdk/fsdev.h 00:06:44.510 TEST_HEADER include/spdk/fsdev_module.h 00:06:44.510 TEST_HEADER include/spdk/ftl.h 00:06:44.510 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:44.510 TEST_HEADER include/spdk/gpt_spec.h 00:06:44.510 TEST_HEADER include/spdk/hexlify.h 00:06:44.510 TEST_HEADER include/spdk/histogram_data.h 00:06:44.510 TEST_HEADER include/spdk/idxd.h 00:06:44.510 TEST_HEADER include/spdk/idxd_spec.h 00:06:44.510 TEST_HEADER include/spdk/init.h 00:06:44.510 TEST_HEADER include/spdk/ioat.h 00:06:44.510 TEST_HEADER include/spdk/ioat_spec.h 00:06:44.510 TEST_HEADER include/spdk/iscsi_spec.h 00:06:44.510 TEST_HEADER include/spdk/json.h 00:06:44.510 TEST_HEADER include/spdk/jsonrpc.h 00:06:44.510 CC test/event/event_perf/event_perf.o 00:06:44.510 TEST_HEADER include/spdk/keyring.h 00:06:44.510 TEST_HEADER include/spdk/keyring_module.h 00:06:44.510 TEST_HEADER include/spdk/likely.h 00:06:44.510 TEST_HEADER include/spdk/log.h 00:06:44.510 TEST_HEADER include/spdk/lvol.h 00:06:44.510 TEST_HEADER include/spdk/md5.h 00:06:44.510 TEST_HEADER include/spdk/memory.h 00:06:44.510 CC app/spdk_top/spdk_top.o 00:06:44.510 TEST_HEADER include/spdk/mmio.h 00:06:44.510 TEST_HEADER include/spdk/nbd.h 00:06:44.510 TEST_HEADER include/spdk/net.h 00:06:44.510 TEST_HEADER include/spdk/notify.h 00:06:44.510 TEST_HEADER include/spdk/nvme.h 00:06:44.510 TEST_HEADER 
include/spdk/nvme_intel.h 00:06:44.511 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:44.511 CC test/app/bdev_svc/bdev_svc.o 00:06:44.511 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:44.769 TEST_HEADER include/spdk/nvme_spec.h 00:06:44.769 TEST_HEADER include/spdk/nvme_zns.h 00:06:44.769 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:44.769 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:44.769 TEST_HEADER include/spdk/nvmf.h 00:06:44.769 TEST_HEADER include/spdk/nvmf_spec.h 00:06:44.769 TEST_HEADER include/spdk/nvmf_transport.h 00:06:44.769 TEST_HEADER include/spdk/opal.h 00:06:44.769 TEST_HEADER include/spdk/opal_spec.h 00:06:44.769 TEST_HEADER include/spdk/pci_ids.h 00:06:44.769 TEST_HEADER include/spdk/pipe.h 00:06:44.769 TEST_HEADER include/spdk/queue.h 00:06:44.769 TEST_HEADER include/spdk/reduce.h 00:06:44.769 TEST_HEADER include/spdk/rpc.h 00:06:44.769 TEST_HEADER include/spdk/scheduler.h 00:06:44.769 TEST_HEADER include/spdk/scsi.h 00:06:44.769 TEST_HEADER include/spdk/scsi_spec.h 00:06:44.769 TEST_HEADER include/spdk/sock.h 00:06:44.769 TEST_HEADER include/spdk/stdinc.h 00:06:44.769 TEST_HEADER include/spdk/string.h 00:06:44.769 TEST_HEADER include/spdk/thread.h 00:06:44.769 LINK spdk_nvme_discover 00:06:44.769 TEST_HEADER include/spdk/trace.h 00:06:44.769 TEST_HEADER include/spdk/trace_parser.h 00:06:44.769 TEST_HEADER include/spdk/tree.h 00:06:44.769 TEST_HEADER include/spdk/ublk.h 00:06:44.769 TEST_HEADER include/spdk/util.h 00:06:44.769 TEST_HEADER include/spdk/uuid.h 00:06:44.769 TEST_HEADER include/spdk/version.h 00:06:44.769 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:44.769 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:44.769 TEST_HEADER include/spdk/vhost.h 00:06:44.769 TEST_HEADER include/spdk/vmd.h 00:06:44.769 TEST_HEADER include/spdk/xor.h 00:06:44.769 TEST_HEADER include/spdk/zipf.h 00:06:44.769 CXX test/cpp_headers/accel.o 00:06:44.769 CC test/env/mem_callbacks/mem_callbacks.o 00:06:44.769 LINK event_perf 00:06:44.769 LINK verify 
00:06:44.769 LINK bdev_svc 00:06:45.028 CXX test/cpp_headers/accel_module.o 00:06:45.028 CC app/vhost/vhost.o 00:06:45.028 CC test/event/reactor/reactor.o 00:06:45.028 CXX test/cpp_headers/assert.o 00:06:45.287 LINK reactor 00:06:45.287 LINK test_dma 00:06:45.287 CC examples/thread/thread/thread_ex.o 00:06:45.287 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:45.287 LINK vhost 00:06:45.287 CXX test/cpp_headers/barrier.o 00:06:45.287 LINK mem_callbacks 00:06:45.287 LINK spdk_nvme_perf 00:06:45.546 CC test/event/reactor_perf/reactor_perf.o 00:06:45.546 CXX test/cpp_headers/base64.o 00:06:45.546 LINK thread 00:06:45.546 CC test/event/app_repeat/app_repeat.o 00:06:45.546 LINK spdk_nvme_identify 00:06:45.546 CC test/event/scheduler/scheduler.o 00:06:45.546 CC test/env/vtophys/vtophys.o 00:06:45.546 LINK reactor_perf 00:06:45.805 CXX test/cpp_headers/bdev.o 00:06:45.805 CC app/spdk_dd/spdk_dd.o 00:06:45.805 LINK app_repeat 00:06:45.805 LINK nvme_fuzz 00:06:45.805 LINK spdk_top 00:06:45.805 LINK vtophys 00:06:45.805 LINK scheduler 00:06:45.805 CC test/app/histogram_perf/histogram_perf.o 00:06:45.805 CXX test/cpp_headers/bdev_module.o 00:06:46.064 CC examples/sock/hello_world/hello_sock.o 00:06:46.064 CC app/fio/nvme/fio_plugin.o 00:06:46.064 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:46.064 CC test/app/jsoncat/jsoncat.o 00:06:46.064 CC test/app/stub/stub.o 00:06:46.064 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:46.064 LINK histogram_perf 00:06:46.064 CXX test/cpp_headers/bdev_zone.o 00:06:46.064 LINK spdk_dd 00:06:46.323 CC test/rpc_client/rpc_client_test.o 00:06:46.323 LINK jsoncat 00:06:46.323 LINK hello_sock 00:06:46.323 LINK env_dpdk_post_init 00:06:46.323 LINK stub 00:06:46.323 CXX test/cpp_headers/bit_array.o 00:06:46.323 CXX test/cpp_headers/bit_pool.o 00:06:46.581 CC examples/vmd/lsvmd/lsvmd.o 00:06:46.581 LINK rpc_client_test 00:06:46.581 CC test/env/memory/memory_ut.o 00:06:46.581 CXX test/cpp_headers/blob_bdev.o 00:06:46.581 LINK lsvmd 
00:06:46.581 CC examples/idxd/perf/perf.o 00:06:46.581 CC examples/accel/perf/accel_perf.o 00:06:46.839 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:46.839 LINK spdk_nvme 00:06:46.839 CC test/accel/dif/dif.o 00:06:46.839 CXX test/cpp_headers/blobfs_bdev.o 00:06:46.839 CC test/blobfs/mkfs/mkfs.o 00:06:47.098 CC examples/vmd/led/led.o 00:06:47.098 CC app/fio/bdev/fio_plugin.o 00:06:47.098 CXX test/cpp_headers/blobfs.o 00:06:47.098 LINK hello_fsdev 00:06:47.098 LINK idxd_perf 00:06:47.098 LINK led 00:06:47.098 LINK mkfs 00:06:47.355 CXX test/cpp_headers/blob.o 00:06:47.355 LINK accel_perf 00:06:47.613 CXX test/cpp_headers/conf.o 00:06:47.613 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:47.613 CC test/nvme/aer/aer.o 00:06:47.613 CC test/lvol/esnap/esnap.o 00:06:47.613 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:47.613 LINK spdk_bdev 00:06:47.613 CXX test/cpp_headers/config.o 00:06:47.613 CC examples/blob/hello_world/hello_blob.o 00:06:47.872 CXX test/cpp_headers/cpuset.o 00:06:47.872 LINK dif 00:06:47.872 CC test/env/pci/pci_ut.o 00:06:47.872 CXX test/cpp_headers/crc16.o 00:06:47.872 LINK aer 00:06:47.872 LINK hello_blob 00:06:48.130 CXX test/cpp_headers/crc32.o 00:06:48.130 CXX test/cpp_headers/crc64.o 00:06:48.130 CC examples/nvme/hello_world/hello_world.o 00:06:48.130 LINK vhost_fuzz 00:06:48.130 CXX test/cpp_headers/dif.o 00:06:48.388 LINK memory_ut 00:06:48.388 CC test/nvme/reset/reset.o 00:06:48.388 CC test/nvme/sgl/sgl.o 00:06:48.388 LINK pci_ut 00:06:48.388 CC examples/blob/cli/blobcli.o 00:06:48.388 LINK iscsi_fuzz 00:06:48.388 LINK hello_world 00:06:48.388 CXX test/cpp_headers/dma.o 00:06:48.388 CC test/nvme/e2edp/nvme_dp.o 00:06:48.701 CC test/nvme/overhead/overhead.o 00:06:48.701 LINK reset 00:06:48.701 CXX test/cpp_headers/endian.o 00:06:48.701 LINK sgl 00:06:48.701 CC examples/nvme/reconnect/reconnect.o 00:06:48.701 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:48.988 CC examples/nvme/arbitration/arbitration.o 00:06:48.988 CXX 
test/cpp_headers/env_dpdk.o 00:06:48.988 LINK nvme_dp 00:06:48.988 CXX test/cpp_headers/env.o 00:06:48.988 CC examples/nvme/hotplug/hotplug.o 00:06:48.988 LINK overhead 00:06:48.988 LINK blobcli 00:06:48.988 CXX test/cpp_headers/event.o 00:06:49.246 CC test/nvme/err_injection/err_injection.o 00:06:49.246 CC test/nvme/startup/startup.o 00:06:49.246 LINK reconnect 00:06:49.246 LINK hotplug 00:06:49.246 LINK arbitration 00:06:49.246 CC test/nvme/reserve/reserve.o 00:06:49.246 CXX test/cpp_headers/fd_group.o 00:06:49.504 LINK err_injection 00:06:49.504 CC test/nvme/simple_copy/simple_copy.o 00:06:49.505 LINK startup 00:06:49.505 LINK nvme_manage 00:06:49.505 CXX test/cpp_headers/fd.o 00:06:49.505 CC test/nvme/connect_stress/connect_stress.o 00:06:49.505 LINK reserve 00:06:49.505 CC test/nvme/boot_partition/boot_partition.o 00:06:49.763 CXX test/cpp_headers/file.o 00:06:49.763 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:49.763 CC test/bdev/bdevio/bdevio.o 00:06:49.763 CC test/nvme/compliance/nvme_compliance.o 00:06:49.763 LINK simple_copy 00:06:49.763 LINK boot_partition 00:06:49.763 LINK connect_stress 00:06:49.763 CC examples/nvme/abort/abort.o 00:06:49.763 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:49.763 CXX test/cpp_headers/fsdev.o 00:06:50.022 LINK cmb_copy 00:06:50.022 CC test/nvme/fused_ordering/fused_ordering.o 00:06:50.022 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:50.022 CC test/nvme/fdp/fdp.o 00:06:50.022 LINK pmr_persistence 00:06:50.022 CXX test/cpp_headers/fsdev_module.o 00:06:50.022 CXX test/cpp_headers/ftl.o 00:06:50.022 LINK nvme_compliance 00:06:50.281 LINK bdevio 00:06:50.281 CXX test/cpp_headers/fuse_dispatcher.o 00:06:50.281 LINK doorbell_aers 00:06:50.281 LINK abort 00:06:50.281 LINK fused_ordering 00:06:50.281 CXX test/cpp_headers/gpt_spec.o 00:06:50.281 CXX test/cpp_headers/hexlify.o 00:06:50.281 CC test/nvme/cuse/cuse.o 00:06:50.281 CXX test/cpp_headers/histogram_data.o 00:06:50.540 CXX test/cpp_headers/idxd.o 00:06:50.540 
CXX test/cpp_headers/idxd_spec.o 00:06:50.540 CXX test/cpp_headers/init.o 00:06:50.540 LINK fdp 00:06:50.540 CXX test/cpp_headers/ioat.o 00:06:50.540 CXX test/cpp_headers/ioat_spec.o 00:06:50.540 CC examples/bdev/hello_world/hello_bdev.o 00:06:50.540 CXX test/cpp_headers/iscsi_spec.o 00:06:50.540 CXX test/cpp_headers/json.o 00:06:50.540 CC examples/bdev/bdevperf/bdevperf.o 00:06:50.540 CXX test/cpp_headers/jsonrpc.o 00:06:50.540 CXX test/cpp_headers/keyring.o 00:06:50.798 CXX test/cpp_headers/keyring_module.o 00:06:50.798 CXX test/cpp_headers/likely.o 00:06:50.798 CXX test/cpp_headers/log.o 00:06:50.798 CXX test/cpp_headers/lvol.o 00:06:50.799 CXX test/cpp_headers/md5.o 00:06:50.799 CXX test/cpp_headers/memory.o 00:06:50.799 LINK hello_bdev 00:06:50.799 CXX test/cpp_headers/mmio.o 00:06:51.057 CXX test/cpp_headers/nbd.o 00:06:51.057 CXX test/cpp_headers/net.o 00:06:51.057 CXX test/cpp_headers/notify.o 00:06:51.057 CXX test/cpp_headers/nvme.o 00:06:51.057 CXX test/cpp_headers/nvme_intel.o 00:06:51.057 CXX test/cpp_headers/nvme_ocssd.o 00:06:51.057 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:51.057 CXX test/cpp_headers/nvme_spec.o 00:06:51.057 CXX test/cpp_headers/nvme_zns.o 00:06:51.057 CXX test/cpp_headers/nvmf_cmd.o 00:06:51.316 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:51.316 CXX test/cpp_headers/nvmf.o 00:06:51.316 CXX test/cpp_headers/nvmf_spec.o 00:06:51.316 CXX test/cpp_headers/nvmf_transport.o 00:06:51.316 CXX test/cpp_headers/opal.o 00:06:51.316 CXX test/cpp_headers/opal_spec.o 00:06:51.316 CXX test/cpp_headers/pci_ids.o 00:06:51.316 CXX test/cpp_headers/pipe.o 00:06:51.574 CXX test/cpp_headers/queue.o 00:06:51.574 CXX test/cpp_headers/reduce.o 00:06:51.574 CXX test/cpp_headers/rpc.o 00:06:51.574 CXX test/cpp_headers/scheduler.o 00:06:51.574 CXX test/cpp_headers/scsi.o 00:06:51.574 CXX test/cpp_headers/scsi_spec.o 00:06:51.574 CXX test/cpp_headers/sock.o 00:06:51.574 CXX test/cpp_headers/stdinc.o 00:06:51.574 CXX test/cpp_headers/string.o 00:06:51.833 CXX 
test/cpp_headers/thread.o 00:06:51.833 LINK bdevperf 00:06:51.833 CXX test/cpp_headers/trace.o 00:06:51.833 CXX test/cpp_headers/trace_parser.o 00:06:51.833 CXX test/cpp_headers/tree.o 00:06:51.833 CXX test/cpp_headers/ublk.o 00:06:51.833 CXX test/cpp_headers/util.o 00:06:51.833 CXX test/cpp_headers/uuid.o 00:06:51.833 CXX test/cpp_headers/version.o 00:06:51.833 CXX test/cpp_headers/vfio_user_pci.o 00:06:51.833 CXX test/cpp_headers/vfio_user_spec.o 00:06:51.833 CXX test/cpp_headers/vhost.o 00:06:51.833 CXX test/cpp_headers/vmd.o 00:06:52.091 CXX test/cpp_headers/xor.o 00:06:52.091 CXX test/cpp_headers/zipf.o 00:06:52.091 LINK cuse 00:06:52.091 CC examples/nvmf/nvmf/nvmf.o 00:06:52.659 LINK nvmf 00:06:55.205 LINK esnap 00:06:55.205 00:06:55.205 real 1m43.463s 00:06:55.205 user 9m33.218s 00:06:55.205 sys 1m53.472s 00:06:55.205 19:26:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:55.205 19:26:48 make -- common/autotest_common.sh@10 -- $ set +x 00:06:55.205 ************************************ 00:06:55.205 END TEST make 00:06:55.205 ************************************ 00:06:55.463 19:26:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:55.463 19:26:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:55.463 19:26:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:55.463 19:26:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.463 19:26:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:55.463 19:26:48 -- pm/common@44 -- $ pid=5251 00:06:55.463 19:26:48 -- pm/common@50 -- $ kill -TERM 5251 00:06:55.463 19:26:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.463 19:26:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:55.463 19:26:48 -- pm/common@44 -- $ pid=5252 00:06:55.463 19:26:48 -- pm/common@50 -- $ kill -TERM 5252 00:06:55.463 19:26:48 -- spdk/autorun.sh@26 -- $ 
(( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:55.463 19:26:48 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:55.463 19:26:48 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.463 19:26:48 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.463 19:26:48 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.463 19:26:48 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.463 19:26:48 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.463 19:26:48 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.463 19:26:48 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.463 19:26:48 -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.463 19:26:48 -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.463 19:26:48 -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.463 19:26:48 -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.463 19:26:48 -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.463 19:26:48 -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.463 19:26:48 -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.463 19:26:48 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.463 19:26:48 -- scripts/common.sh@344 -- # case "$op" in 00:06:55.463 19:26:48 -- scripts/common.sh@345 -- # : 1 00:06:55.463 19:26:48 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.463 19:26:48 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.463 19:26:48 -- scripts/common.sh@365 -- # decimal 1 00:06:55.463 19:26:48 -- scripts/common.sh@353 -- # local d=1 00:06:55.463 19:26:48 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.463 19:26:48 -- scripts/common.sh@355 -- # echo 1 00:06:55.463 19:26:48 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.463 19:26:48 -- scripts/common.sh@366 -- # decimal 2 00:06:55.463 19:26:48 -- scripts/common.sh@353 -- # local d=2 00:06:55.463 19:26:48 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.463 19:26:48 -- scripts/common.sh@355 -- # echo 2 00:06:55.463 19:26:48 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.463 19:26:48 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.463 19:26:48 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.463 19:26:48 -- scripts/common.sh@368 -- # return 0 00:06:55.463 19:26:48 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.463 19:26:48 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.463 --rc genhtml_branch_coverage=1 00:06:55.463 --rc genhtml_function_coverage=1 00:06:55.463 --rc genhtml_legend=1 00:06:55.463 --rc geninfo_all_blocks=1 00:06:55.463 --rc geninfo_unexecuted_blocks=1 00:06:55.463 00:06:55.463 ' 00:06:55.463 19:26:48 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.463 --rc genhtml_branch_coverage=1 00:06:55.463 --rc genhtml_function_coverage=1 00:06:55.464 --rc genhtml_legend=1 00:06:55.464 --rc geninfo_all_blocks=1 00:06:55.464 --rc geninfo_unexecuted_blocks=1 00:06:55.464 00:06:55.464 ' 00:06:55.464 19:26:48 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.464 --rc genhtml_branch_coverage=1 00:06:55.464 --rc 
genhtml_function_coverage=1 00:06:55.464 --rc genhtml_legend=1 00:06:55.464 --rc geninfo_all_blocks=1 00:06:55.464 --rc geninfo_unexecuted_blocks=1 00:06:55.464 00:06:55.464 ' 00:06:55.464 19:26:48 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.464 --rc genhtml_branch_coverage=1 00:06:55.464 --rc genhtml_function_coverage=1 00:06:55.464 --rc genhtml_legend=1 00:06:55.464 --rc geninfo_all_blocks=1 00:06:55.464 --rc geninfo_unexecuted_blocks=1 00:06:55.464 00:06:55.464 ' 00:06:55.464 19:26:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.464 19:26:48 -- nvmf/common.sh@7 -- # uname -s 00:06:55.464 19:26:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.464 19:26:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.464 19:26:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.464 19:26:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.464 19:26:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.464 19:26:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.464 19:26:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.464 19:26:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.464 19:26:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.464 19:26:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.464 19:26:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:369e65d1-545d-4691-9977-d4c00e5b0446 00:06:55.464 19:26:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=369e65d1-545d-4691-9977-d4c00e5b0446 00:06:55.464 19:26:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.464 19:26:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.464 19:26:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:55.464 19:26:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:55.464 19:26:48 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.464 19:26:48 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.464 19:26:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.464 19:26:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.464 19:26:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.464 19:26:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.464 19:26:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.464 19:26:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.464 19:26:48 -- paths/export.sh@5 -- # export PATH 00:06:55.464 19:26:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.464 19:26:48 -- nvmf/common.sh@51 -- # : 0 00:06:55.464 19:26:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.464 19:26:48 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.464 19:26:48 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:55.464 19:26:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.464 19:26:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.722 19:26:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.722 19:26:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.722 19:26:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.722 19:26:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.722 19:26:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:55.722 19:26:48 -- spdk/autotest.sh@32 -- # uname -s 00:06:55.722 19:26:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:55.722 19:26:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:55.722 19:26:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:55.722 19:26:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:55.722 19:26:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:55.722 19:26:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:55.722 19:26:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:55.722 19:26:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:55.722 19:26:48 -- spdk/autotest.sh@48 -- # udevadm_pid=54388 00:06:55.722 19:26:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:55.722 19:26:48 -- pm/common@17 -- # local monitor 00:06:55.722 19:26:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.722 19:26:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.722 19:26:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:55.722 19:26:48 -- pm/common@25 -- # sleep 1 00:06:55.722 19:26:48 -- pm/common@21 -- # date +%s 00:06:55.722 19:26:48 -- 
pm/common@21 -- # date +%s 00:06:55.722 19:26:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426808 00:06:55.723 19:26:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426808 00:06:55.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426808_collect-vmstat.pm.log 00:06:55.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426808_collect-cpu-load.pm.log 00:06:56.658 19:26:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:56.658 19:26:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:56.658 19:26:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.658 19:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:56.658 19:26:49 -- spdk/autotest.sh@59 -- # create_test_list 00:06:56.658 19:26:49 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:56.658 19:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:56.658 19:26:50 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:56.658 19:26:50 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:56.658 19:26:50 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:56.658 19:26:50 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:56.658 19:26:50 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:56.658 19:26:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:56.658 19:26:50 -- common/autotest_common.sh@1457 -- # uname 00:06:56.658 19:26:50 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:56.658 19:26:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:56.658 19:26:50 -- common/autotest_common.sh@1477 -- 
# uname 00:06:56.658 19:26:50 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:56.658 19:26:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:56.658 19:26:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:56.916 lcov: LCOV version 1.15 00:06:56.916 19:26:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:15.004 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:15.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:29.880 19:27:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:29.880 19:27:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.880 19:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:29.880 19:27:21 -- spdk/autotest.sh@78 -- # rm -f 00:07:29.880 19:27:21 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:29.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.880 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:29.880 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:29.880 19:27:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:29.880 19:27:22 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:29.880 19:27:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:29.880 19:27:22 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:29.880 
19:27:22 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:29.880 19:27:22 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:29.880 19:27:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:29.880 19:27:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:29.880 19:27:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:29.880 19:27:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:29.880 19:27:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:29.880 19:27:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:29.880 19:27:22 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:29.880 19:27:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:29.880 19:27:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:29.880 19:27:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:29.880 19:27:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:29.880 19:27:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:07:29.880 19:27:22 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:07:29.880 19:27:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:29.880 19:27:22 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:07:29.880 19:27:22 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:07:29.880 19:27:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:29.880 19:27:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:29.880 19:27:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:29.880 19:27:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:29.880 19:27:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:29.880 19:27:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:29.880 19:27:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:29.880 19:27:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:29.880 No valid GPT data, bailing 00:07:29.880 19:27:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:29.880 19:27:22 -- scripts/common.sh@394 -- # pt= 00:07:29.880 19:27:22 -- scripts/common.sh@395 -- # return 1 00:07:29.880 19:27:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:29.880 1+0 records in 00:07:29.880 1+0 records out 00:07:29.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486138 s, 216 MB/s 00:07:29.880 19:27:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:29.880 19:27:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:29.880 19:27:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:29.880 19:27:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:29.880 19:27:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:29.880 No valid GPT data, bailing 00:07:29.880 19:27:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:29.880 19:27:22 -- scripts/common.sh@394 -- # pt= 00:07:29.880 19:27:22 -- scripts/common.sh@395 -- # return 1 00:07:29.880 19:27:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:29.880 1+0 records in 00:07:29.880 1+0 records 
out 00:07:29.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411428 s, 255 MB/s 00:07:29.880 19:27:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:29.880 19:27:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:29.880 19:27:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:29.880 19:27:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:29.880 19:27:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:07:29.880 No valid GPT data, bailing 00:07:29.881 19:27:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:29.881 19:27:22 -- scripts/common.sh@394 -- # pt= 00:07:29.881 19:27:22 -- scripts/common.sh@395 -- # return 1 00:07:29.881 19:27:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:29.881 1+0 records in 00:07:29.881 1+0 records out 00:07:29.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363494 s, 288 MB/s 00:07:29.881 19:27:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:29.881 19:27:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:29.881 19:27:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:29.881 19:27:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:29.881 19:27:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:29.881 No valid GPT data, bailing 00:07:29.881 19:27:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:29.881 19:27:22 -- scripts/common.sh@394 -- # pt= 00:07:29.881 19:27:22 -- scripts/common.sh@395 -- # return 1 00:07:29.881 19:27:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:29.881 1+0 records in 00:07:29.881 1+0 records out 00:07:29.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437374 s, 240 MB/s 00:07:29.881 19:27:22 -- spdk/autotest.sh@105 -- # sync 00:07:29.881 19:27:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:07:29.881 19:27:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:29.881 19:27:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:31.784 19:27:24 -- spdk/autotest.sh@111 -- # uname -s 00:07:31.784 19:27:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:31.784 19:27:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:31.784 19:27:24 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:32.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.351 Hugepages 00:07:32.351 node hugesize free / total 00:07:32.351 node0 1048576kB 0 / 0 00:07:32.351 node0 2048kB 0 / 0 00:07:32.351 00:07:32.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:32.351 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:32.351 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:32.351 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:32.351 19:27:25 -- spdk/autotest.sh@117 -- # uname -s 00:07:32.351 19:27:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:32.351 19:27:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:32.351 19:27:25 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:33.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:33.285 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:33.285 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:33.285 19:27:26 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:34.665 19:27:27 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:34.665 19:27:27 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:34.665 19:27:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:34.665 19:27:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:07:34.665 19:27:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:34.665 19:27:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:34.665 19:27:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:34.665 19:27:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:34.665 19:27:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:34.665 19:27:27 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:34.665 19:27:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:34.665 19:27:27 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:34.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.665 Waiting for block devices as requested 00:07:34.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.922 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.922 19:27:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:34.922 19:27:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:34.922 19:27:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:34.922 19:27:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:34.922 
19:27:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:34.922 19:27:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:34.922 19:27:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:34.922 19:27:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:34.922 19:27:28 -- common/autotest_common.sh@1543 -- # continue 00:07:34.922 19:27:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:34.922 19:27:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:34.922 19:27:28 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:34.922 19:27:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:34.922 19:27:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:34.922 19:27:28 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:34.922 19:27:28 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:34.922 19:27:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:34.922 19:27:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:34.922 19:27:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:35.180 19:27:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:35.180 19:27:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:35.180 19:27:28 -- common/autotest_common.sh@1543 -- # continue 00:07:35.180 19:27:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:35.180 19:27:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.180 19:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:35.180 19:27:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:35.180 19:27:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.180 19:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:35.180 19:27:28 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.004 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.004 19:27:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:36.005 19:27:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.005 19:27:29 -- common/autotest_common.sh@10 -- # set +x 00:07:36.005 19:27:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:36.005 19:27:29 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:36.005 19:27:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:36.005 19:27:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:36.005 19:27:29 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:36.005 19:27:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:36.005 19:27:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:36.005 19:27:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:36.005 19:27:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:36.005 19:27:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:36.005 19:27:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:36.005 19:27:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:36.005 19:27:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:36.005 19:27:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:36.005 19:27:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:36.005 19:27:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:36.005 19:27:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:36.005 19:27:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:36.005 19:27:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:36.005 19:27:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:36.005 19:27:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:36.005 19:27:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:36.005 19:27:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:36.005 19:27:29 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:36.005 19:27:29 -- 
common/autotest_common.sh@1572 -- # return 0 00:07:36.005 19:27:29 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:36.005 19:27:29 -- common/autotest_common.sh@1580 -- # return 0 00:07:36.005 19:27:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:36.005 19:27:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:36.005 19:27:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:36.005 19:27:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:36.005 19:27:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:36.005 19:27:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.005 19:27:29 -- common/autotest_common.sh@10 -- # set +x 00:07:36.005 19:27:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:36.005 19:27:29 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:36.005 19:27:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.005 19:27:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.005 19:27:29 -- common/autotest_common.sh@10 -- # set +x 00:07:36.005 ************************************ 00:07:36.005 START TEST env 00:07:36.005 ************************************ 00:07:36.005 19:27:29 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:36.264 * Looking for test storage... 
00:07:36.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.264 19:27:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.264 19:27:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.264 19:27:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.264 19:27:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.264 19:27:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.264 19:27:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.264 19:27:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.264 19:27:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.264 19:27:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.264 19:27:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.264 19:27:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.264 19:27:29 env -- scripts/common.sh@344 -- # case "$op" in 00:07:36.264 19:27:29 env -- scripts/common.sh@345 -- # : 1 00:07:36.264 19:27:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.264 19:27:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.264 19:27:29 env -- scripts/common.sh@365 -- # decimal 1 00:07:36.264 19:27:29 env -- scripts/common.sh@353 -- # local d=1 00:07:36.264 19:27:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.264 19:27:29 env -- scripts/common.sh@355 -- # echo 1 00:07:36.264 19:27:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.264 19:27:29 env -- scripts/common.sh@366 -- # decimal 2 00:07:36.264 19:27:29 env -- scripts/common.sh@353 -- # local d=2 00:07:36.264 19:27:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.264 19:27:29 env -- scripts/common.sh@355 -- # echo 2 00:07:36.264 19:27:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.264 19:27:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.264 19:27:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.264 19:27:29 env -- scripts/common.sh@368 -- # return 0 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.264 --rc genhtml_branch_coverage=1 00:07:36.264 --rc genhtml_function_coverage=1 00:07:36.264 --rc genhtml_legend=1 00:07:36.264 --rc geninfo_all_blocks=1 00:07:36.264 --rc geninfo_unexecuted_blocks=1 00:07:36.264 00:07:36.264 ' 00:07:36.264 19:27:29 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.264 19:27:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.264 19:27:29 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.264 ************************************ 00:07:36.264 START TEST env_memory 00:07:36.264 ************************************ 00:07:36.264 19:27:29 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:36.264 00:07:36.264 00:07:36.264 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.264 http://cunit.sourceforge.net/ 00:07:36.264 00:07:36.264 00:07:36.264 Suite: memory 00:07:36.264 Test: alloc and free memory map ...[2024-12-05 19:27:29.689351] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:36.523 passed 00:07:36.523 Test: mem map translation ...[2024-12-05 19:27:29.748767] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:36.523 [2024-12-05 19:27:29.748847] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:36.523 [2024-12-05 19:27:29.748946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:36.523 [2024-12-05 19:27:29.748981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:36.523 passed 00:07:36.524 Test: mem map registration ...[2024-12-05 19:27:29.846822] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:36.524 [2024-12-05 19:27:29.846901] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:36.524 passed 00:07:36.782 Test: mem map adjacent registrations ...passed 00:07:36.782 00:07:36.782 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.782 suites 1 1 n/a 0 0 00:07:36.782 tests 4 4 4 0 0 00:07:36.782 asserts 152 152 152 0 n/a 00:07:36.782 00:07:36.782 Elapsed time = 0.342 seconds 00:07:36.782 ************************************ 00:07:36.782 END TEST env_memory 00:07:36.782 ************************************ 00:07:36.782 00:07:36.782 real 0m0.383s 00:07:36.782 user 0m0.348s 00:07:36.782 sys 0m0.028s 00:07:36.782 19:27:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.782 19:27:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:36.782 19:27:30 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:36.782 19:27:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.782 19:27:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.782 19:27:30 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.782 
************************************ 00:07:36.782 START TEST env_vtophys 00:07:36.782 ************************************ 00:07:36.782 19:27:30 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:36.782 EAL: lib.eal log level changed from notice to debug 00:07:36.782 EAL: Detected lcore 0 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 1 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 2 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 3 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 4 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 5 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 6 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 7 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 8 as core 0 on socket 0 00:07:36.782 EAL: Detected lcore 9 as core 0 on socket 0 00:07:36.782 EAL: Maximum logical cores by configuration: 128 00:07:36.782 EAL: Detected CPU lcores: 10 00:07:36.782 EAL: Detected NUMA nodes: 1 00:07:36.783 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:36.783 EAL: Detected shared linkage of DPDK 00:07:36.783 EAL: No shared files mode enabled, IPC will be disabled 00:07:36.783 EAL: Selected IOVA mode 'PA' 00:07:36.783 EAL: Probing VFIO support... 00:07:36.783 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:36.783 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:36.783 EAL: Ask a virtual area of 0x2e000 bytes 00:07:36.783 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:36.783 EAL: Setting up physically contiguous memory... 
00:07:36.783 EAL: Setting maximum number of open files to 524288 00:07:36.783 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:36.783 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:36.783 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.783 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:36.783 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.783 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.783 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:36.783 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:36.783 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.783 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:36.783 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.783 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.783 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:36.783 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:36.783 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.783 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:36.783 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.783 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.783 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:36.783 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:36.783 EAL: Ask a virtual area of 0x61000 bytes 00:07:36.783 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:36.783 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:36.783 EAL: Ask a virtual area of 0x400000000 bytes 00:07:36.783 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:36.783 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:36.783 EAL: Hugepages will be freed exactly as allocated. 
00:07:36.783 EAL: No shared files mode enabled, IPC is disabled 00:07:36.783 EAL: No shared files mode enabled, IPC is disabled 00:07:37.041 EAL: TSC frequency is ~2200000 KHz 00:07:37.041 EAL: Main lcore 0 is ready (tid=7f55c570ca40;cpuset=[0]) 00:07:37.041 EAL: Trying to obtain current memory policy. 00:07:37.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.041 EAL: Restoring previous memory policy: 0 00:07:37.041 EAL: request: mp_malloc_sync 00:07:37.041 EAL: No shared files mode enabled, IPC is disabled 00:07:37.041 EAL: Heap on socket 0 was expanded by 2MB 00:07:37.041 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:37.041 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:37.041 EAL: Mem event callback 'spdk:(nil)' registered 00:07:37.041 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:37.041 00:07:37.041 00:07:37.042 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.042 http://cunit.sourceforge.net/ 00:07:37.042 00:07:37.042 00:07:37.042 Suite: components_suite 00:07:37.610 Test: vtophys_malloc_test ...passed 00:07:37.610 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:37.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.610 EAL: Restoring previous memory policy: 4 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was expanded by 4MB 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was shrunk by 4MB 00:07:37.610 EAL: Trying to obtain current memory policy. 
00:07:37.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.610 EAL: Restoring previous memory policy: 4 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was expanded by 6MB 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was shrunk by 6MB 00:07:37.610 EAL: Trying to obtain current memory policy. 00:07:37.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.610 EAL: Restoring previous memory policy: 4 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was expanded by 10MB 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was shrunk by 10MB 00:07:37.610 EAL: Trying to obtain current memory policy. 00:07:37.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.610 EAL: Restoring previous memory policy: 4 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was expanded by 18MB 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was shrunk by 18MB 00:07:37.610 EAL: Trying to obtain current memory policy. 
00:07:37.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.610 EAL: Restoring previous memory policy: 4 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was expanded by 34MB 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was shrunk by 34MB 00:07:37.610 EAL: Trying to obtain current memory policy. 00:07:37.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.610 EAL: Restoring previous memory policy: 4 00:07:37.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.610 EAL: request: mp_malloc_sync 00:07:37.610 EAL: No shared files mode enabled, IPC is disabled 00:07:37.610 EAL: Heap on socket 0 was expanded by 66MB 00:07:37.870 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.870 EAL: request: mp_malloc_sync 00:07:37.870 EAL: No shared files mode enabled, IPC is disabled 00:07:37.870 EAL: Heap on socket 0 was shrunk by 66MB 00:07:37.870 EAL: Trying to obtain current memory policy. 00:07:37.870 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:37.870 EAL: Restoring previous memory policy: 4 00:07:37.870 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.870 EAL: request: mp_malloc_sync 00:07:37.870 EAL: No shared files mode enabled, IPC is disabled 00:07:37.870 EAL: Heap on socket 0 was expanded by 130MB 00:07:38.129 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.129 EAL: request: mp_malloc_sync 00:07:38.129 EAL: No shared files mode enabled, IPC is disabled 00:07:38.129 EAL: Heap on socket 0 was shrunk by 130MB 00:07:38.387 EAL: Trying to obtain current memory policy. 
00:07:38.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:38.387 EAL: Restoring previous memory policy: 4 00:07:38.387 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.387 EAL: request: mp_malloc_sync 00:07:38.387 EAL: No shared files mode enabled, IPC is disabled 00:07:38.387 EAL: Heap on socket 0 was expanded by 258MB 00:07:38.956 EAL: Calling mem event callback 'spdk:(nil)' 00:07:38.956 EAL: request: mp_malloc_sync 00:07:38.956 EAL: No shared files mode enabled, IPC is disabled 00:07:38.956 EAL: Heap on socket 0 was shrunk by 258MB 00:07:39.215 EAL: Trying to obtain current memory policy. 00:07:39.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.474 EAL: Restoring previous memory policy: 4 00:07:39.474 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.474 EAL: request: mp_malloc_sync 00:07:39.474 EAL: No shared files mode enabled, IPC is disabled 00:07:39.474 EAL: Heap on socket 0 was expanded by 514MB 00:07:40.410 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.410 EAL: request: mp_malloc_sync 00:07:40.410 EAL: No shared files mode enabled, IPC is disabled 00:07:40.410 EAL: Heap on socket 0 was shrunk by 514MB 00:07:40.977 EAL: Trying to obtain current memory policy. 
00:07:40.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.235 EAL: Restoring previous memory policy: 4 00:07:41.235 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.235 EAL: request: mp_malloc_sync 00:07:41.235 EAL: No shared files mode enabled, IPC is disabled 00:07:41.235 EAL: Heap on socket 0 was expanded by 1026MB 00:07:43.135 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.135 EAL: request: mp_malloc_sync 00:07:43.135 EAL: No shared files mode enabled, IPC is disabled 00:07:43.135 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:44.510 passed 00:07:44.510 00:07:44.510 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.510 suites 1 1 n/a 0 0 00:07:44.510 tests 2 2 2 0 0 00:07:44.510 asserts 5684 5684 5684 0 n/a 00:07:44.510 00:07:44.510 Elapsed time = 7.464 seconds 00:07:44.510 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.510 EAL: request: mp_malloc_sync 00:07:44.510 EAL: No shared files mode enabled, IPC is disabled 00:07:44.510 EAL: Heap on socket 0 was shrunk by 2MB 00:07:44.510 EAL: No shared files mode enabled, IPC is disabled 00:07:44.510 EAL: No shared files mode enabled, IPC is disabled 00:07:44.510 EAL: No shared files mode enabled, IPC is disabled 00:07:44.510 00:07:44.510 real 0m7.815s 00:07:44.510 user 0m6.562s 00:07:44.510 sys 0m1.076s 00:07:44.510 ************************************ 00:07:44.510 END TEST env_vtophys 00:07:44.510 ************************************ 00:07:44.510 19:27:37 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.510 19:27:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:44.510 19:27:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:44.510 19:27:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.510 19:27:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.510 19:27:37 env -- common/autotest_common.sh@10 -- # set +x 00:07:44.510 
************************************ 00:07:44.510 START TEST env_pci 00:07:44.510 ************************************ 00:07:44.510 19:27:37 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:44.769 00:07:44.769 00:07:44.769 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.769 http://cunit.sourceforge.net/ 00:07:44.769 00:07:44.769 00:07:44.769 Suite: pci 00:07:44.769 Test: pci_hook ...[2024-12-05 19:27:37.965226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56690 has claimed it 00:07:44.769 passed 00:07:44.769 00:07:44.769 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.769 suites 1 1 n/a 0 0 00:07:44.769 tests 1 1 1 0 0 00:07:44.769 asserts 25 25 25 0 n/a 00:07:44.769 00:07:44.769 Elapsed time = 0.012 seconds 00:07:44.769 EAL: Cannot find device (10000:00:01.0) 00:07:44.769 EAL: Failed to attach device on primary process 00:07:44.769 00:07:44.769 real 0m0.097s 00:07:44.769 user 0m0.047s 00:07:44.769 sys 0m0.049s 00:07:44.769 19:27:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.769 19:27:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:44.769 ************************************ 00:07:44.769 END TEST env_pci 00:07:44.769 ************************************ 00:07:44.769 19:27:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:44.769 19:27:38 env -- env/env.sh@15 -- # uname 00:07:44.769 19:27:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:44.769 19:27:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:44.769 19:27:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:44.769 19:27:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:44.769 19:27:38 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.769 19:27:38 env -- common/autotest_common.sh@10 -- # set +x 00:07:44.769 ************************************ 00:07:44.769 START TEST env_dpdk_post_init 00:07:44.769 ************************************ 00:07:44.769 19:27:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:44.769 EAL: Detected CPU lcores: 10 00:07:44.769 EAL: Detected NUMA nodes: 1 00:07:44.769 EAL: Detected shared linkage of DPDK 00:07:44.769 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:44.769 EAL: Selected IOVA mode 'PA' 00:07:45.028 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:45.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:45.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:45.028 Starting DPDK initialization... 00:07:45.028 Starting SPDK post initialization... 00:07:45.028 SPDK NVMe probe 00:07:45.028 Attaching to 0000:00:10.0 00:07:45.028 Attaching to 0000:00:11.0 00:07:45.028 Attached to 0000:00:10.0 00:07:45.028 Attached to 0000:00:11.0 00:07:45.028 Cleaning up... 
00:07:45.028 ************************************ 00:07:45.028 END TEST env_dpdk_post_init 00:07:45.028 ************************************ 00:07:45.028 00:07:45.028 real 0m0.333s 00:07:45.028 user 0m0.108s 00:07:45.028 sys 0m0.122s 00:07:45.028 19:27:38 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.028 19:27:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:45.028 19:27:38 env -- env/env.sh@26 -- # uname 00:07:45.028 19:27:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:45.028 19:27:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:45.028 19:27:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.028 19:27:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.028 19:27:38 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.286 ************************************ 00:07:45.286 START TEST env_mem_callbacks 00:07:45.286 ************************************ 00:07:45.286 19:27:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:45.286 EAL: Detected CPU lcores: 10 00:07:45.286 EAL: Detected NUMA nodes: 1 00:07:45.286 EAL: Detected shared linkage of DPDK 00:07:45.286 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:45.286 EAL: Selected IOVA mode 'PA' 00:07:45.286 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:45.286 00:07:45.286 00:07:45.286 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.286 http://cunit.sourceforge.net/ 00:07:45.286 00:07:45.286 00:07:45.286 Suite: memory 00:07:45.286 Test: test ... 
00:07:45.286 register 0x200000200000 2097152 00:07:45.286 malloc 3145728 00:07:45.286 register 0x200000400000 4194304 00:07:45.286 buf 0x2000004fffc0 len 3145728 PASSED 00:07:45.286 malloc 64 00:07:45.286 buf 0x2000004ffec0 len 64 PASSED 00:07:45.286 malloc 4194304 00:07:45.286 register 0x200000800000 6291456 00:07:45.286 buf 0x2000009fffc0 len 4194304 PASSED 00:07:45.286 free 0x2000004fffc0 3145728 00:07:45.286 free 0x2000004ffec0 64 00:07:45.286 unregister 0x200000400000 4194304 PASSED 00:07:45.286 free 0x2000009fffc0 4194304 00:07:45.286 unregister 0x200000800000 6291456 PASSED 00:07:45.286 malloc 8388608 00:07:45.286 register 0x200000400000 10485760 00:07:45.286 buf 0x2000005fffc0 len 8388608 PASSED 00:07:45.286 free 0x2000005fffc0 8388608 00:07:45.543 unregister 0x200000400000 10485760 PASSED 00:07:45.543 passed 00:07:45.543 00:07:45.543 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.543 suites 1 1 n/a 0 0 00:07:45.543 tests 1 1 1 0 0 00:07:45.543 asserts 15 15 15 0 n/a 00:07:45.543 00:07:45.543 Elapsed time = 0.074 seconds 00:07:45.543 00:07:45.543 real 0m0.286s 00:07:45.543 user 0m0.114s 00:07:45.543 sys 0m0.067s 00:07:45.543 ************************************ 00:07:45.543 END TEST env_mem_callbacks 00:07:45.543 ************************************ 00:07:45.543 19:27:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.543 19:27:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:45.543 ************************************ 00:07:45.543 END TEST env 00:07:45.543 ************************************ 00:07:45.543 00:07:45.543 real 0m9.394s 00:07:45.543 user 0m7.386s 00:07:45.543 sys 0m1.596s 00:07:45.543 19:27:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.543 19:27:38 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.543 19:27:38 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:45.543 19:27:38 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.543 19:27:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.543 19:27:38 -- common/autotest_common.sh@10 -- # set +x 00:07:45.543 ************************************ 00:07:45.543 START TEST rpc 00:07:45.543 ************************************ 00:07:45.543 19:27:38 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:45.543 * Looking for test storage... 00:07:45.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:45.543 19:27:38 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.543 19:27:38 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.543 19:27:38 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.802 19:27:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.802 19:27:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.802 19:27:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.802 19:27:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.802 19:27:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.802 19:27:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:45.802 19:27:39 rpc -- scripts/common.sh@345 -- # : 1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.802 19:27:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.802 19:27:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@353 -- # local d=1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.802 19:27:39 rpc -- scripts/common.sh@355 -- # echo 1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.802 19:27:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@353 -- # local d=2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.802 19:27:39 rpc -- scripts/common.sh@355 -- # echo 2 00:07:45.802 19:27:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.802 19:27:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.802 19:27:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.802 19:27:39 rpc -- scripts/common.sh@368 -- # return 0 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.802 --rc genhtml_branch_coverage=1 00:07:45.802 --rc genhtml_function_coverage=1 00:07:45.802 --rc genhtml_legend=1 00:07:45.802 --rc geninfo_all_blocks=1 00:07:45.802 --rc geninfo_unexecuted_blocks=1 00:07:45.802 00:07:45.802 ' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.802 --rc genhtml_branch_coverage=1 00:07:45.802 --rc genhtml_function_coverage=1 00:07:45.802 --rc genhtml_legend=1 00:07:45.802 --rc geninfo_all_blocks=1 00:07:45.802 --rc geninfo_unexecuted_blocks=1 00:07:45.802 00:07:45.802 ' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:07:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.802 --rc genhtml_branch_coverage=1 00:07:45.802 --rc genhtml_function_coverage=1 00:07:45.802 --rc genhtml_legend=1 00:07:45.802 --rc geninfo_all_blocks=1 00:07:45.802 --rc geninfo_unexecuted_blocks=1 00:07:45.802 00:07:45.802 ' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.802 --rc genhtml_branch_coverage=1 00:07:45.802 --rc genhtml_function_coverage=1 00:07:45.802 --rc genhtml_legend=1 00:07:45.802 --rc geninfo_all_blocks=1 00:07:45.802 --rc geninfo_unexecuted_blocks=1 00:07:45.802 00:07:45.802 ' 00:07:45.802 19:27:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56817 00:07:45.802 19:27:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:45.802 19:27:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:45.802 19:27:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56817 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 56817 ']' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.802 19:27:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.803 [2024-12-05 19:27:39.171836] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:07:45.803 [2024-12-05 19:27:39.172197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56817 ] 00:07:46.060 [2024-12-05 19:27:39.356204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.318 [2024-12-05 19:27:39.509091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:46.318 [2024-12-05 19:27:39.509408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56817' to capture a snapshot of events at runtime. 00:07:46.318 [2024-12-05 19:27:39.509594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.318 [2024-12-05 19:27:39.509808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.318 [2024-12-05 19:27:39.509978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56817 for offline analysis/debug. 
00:07:46.318 [2024-12-05 19:27:39.511795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.252 19:27:40 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.252 19:27:40 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:47.252 19:27:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.252 19:27:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.252 19:27:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:47.252 19:27:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:47.252 19:27:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.252 19:27:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.252 19:27:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.252 ************************************ 00:07:47.252 START TEST rpc_integrity 00:07:47.252 ************************************ 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:47.252 19:27:40 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.252 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.252 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:47.252 { 00:07:47.252 "name": "Malloc0", 00:07:47.253 "aliases": [ 00:07:47.253 "dabcde40-2764-487c-84d4-b88f4a082db0" 00:07:47.253 ], 00:07:47.253 "product_name": "Malloc disk", 00:07:47.253 "block_size": 512, 00:07:47.253 "num_blocks": 16384, 00:07:47.253 "uuid": "dabcde40-2764-487c-84d4-b88f4a082db0", 00:07:47.253 "assigned_rate_limits": { 00:07:47.253 "rw_ios_per_sec": 0, 00:07:47.253 "rw_mbytes_per_sec": 0, 00:07:47.253 "r_mbytes_per_sec": 0, 00:07:47.253 "w_mbytes_per_sec": 0 00:07:47.253 }, 00:07:47.253 "claimed": false, 00:07:47.253 "zoned": false, 00:07:47.253 "supported_io_types": { 00:07:47.253 "read": true, 00:07:47.253 "write": true, 00:07:47.253 "unmap": true, 00:07:47.253 "flush": true, 00:07:47.253 "reset": true, 00:07:47.253 "nvme_admin": false, 00:07:47.253 "nvme_io": false, 00:07:47.253 "nvme_io_md": false, 00:07:47.253 "write_zeroes": true, 00:07:47.253 "zcopy": true, 00:07:47.253 "get_zone_info": false, 00:07:47.253 "zone_management": false, 00:07:47.253 "zone_append": false, 00:07:47.253 "compare": false, 00:07:47.253 "compare_and_write": false, 00:07:47.253 "abort": true, 00:07:47.253 "seek_hole": false, 
00:07:47.253 "seek_data": false, 00:07:47.253 "copy": true, 00:07:47.253 "nvme_iov_md": false 00:07:47.253 }, 00:07:47.253 "memory_domains": [ 00:07:47.253 { 00:07:47.253 "dma_device_id": "system", 00:07:47.253 "dma_device_type": 1 00:07:47.253 }, 00:07:47.253 { 00:07:47.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.253 "dma_device_type": 2 00:07:47.253 } 00:07:47.253 ], 00:07:47.253 "driver_specific": {} 00:07:47.253 } 00:07:47.253 ]' 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.253 [2024-12-05 19:27:40.594215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:47.253 [2024-12-05 19:27:40.594314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.253 [2024-12-05 19:27:40.594349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:47.253 [2024-12-05 19:27:40.594372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.253 [2024-12-05 19:27:40.597771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.253 [2024-12-05 19:27:40.597827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:47.253 Passthru0 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:47.253 { 00:07:47.253 "name": "Malloc0", 00:07:47.253 "aliases": [ 00:07:47.253 "dabcde40-2764-487c-84d4-b88f4a082db0" 00:07:47.253 ], 00:07:47.253 "product_name": "Malloc disk", 00:07:47.253 "block_size": 512, 00:07:47.253 "num_blocks": 16384, 00:07:47.253 "uuid": "dabcde40-2764-487c-84d4-b88f4a082db0", 00:07:47.253 "assigned_rate_limits": { 00:07:47.253 "rw_ios_per_sec": 0, 00:07:47.253 "rw_mbytes_per_sec": 0, 00:07:47.253 "r_mbytes_per_sec": 0, 00:07:47.253 "w_mbytes_per_sec": 0 00:07:47.253 }, 00:07:47.253 "claimed": true, 00:07:47.253 "claim_type": "exclusive_write", 00:07:47.253 "zoned": false, 00:07:47.253 "supported_io_types": { 00:07:47.253 "read": true, 00:07:47.253 "write": true, 00:07:47.253 "unmap": true, 00:07:47.253 "flush": true, 00:07:47.253 "reset": true, 00:07:47.253 "nvme_admin": false, 00:07:47.253 "nvme_io": false, 00:07:47.253 "nvme_io_md": false, 00:07:47.253 "write_zeroes": true, 00:07:47.253 "zcopy": true, 00:07:47.253 "get_zone_info": false, 00:07:47.253 "zone_management": false, 00:07:47.253 "zone_append": false, 00:07:47.253 "compare": false, 00:07:47.253 "compare_and_write": false, 00:07:47.253 "abort": true, 00:07:47.253 "seek_hole": false, 00:07:47.253 "seek_data": false, 00:07:47.253 "copy": true, 00:07:47.253 "nvme_iov_md": false 00:07:47.253 }, 00:07:47.253 "memory_domains": [ 00:07:47.253 { 00:07:47.253 "dma_device_id": "system", 00:07:47.253 "dma_device_type": 1 00:07:47.253 }, 00:07:47.253 { 00:07:47.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.253 "dma_device_type": 2 00:07:47.253 } 00:07:47.253 ], 00:07:47.253 "driver_specific": {} 00:07:47.253 }, 00:07:47.253 { 00:07:47.253 "name": "Passthru0", 00:07:47.253 "aliases": [ 00:07:47.253 "378aed52-669d-5d18-ad57-4fb6627ae374" 00:07:47.253 ], 00:07:47.253 "product_name": "passthru", 00:07:47.253 
"block_size": 512, 00:07:47.253 "num_blocks": 16384, 00:07:47.253 "uuid": "378aed52-669d-5d18-ad57-4fb6627ae374", 00:07:47.253 "assigned_rate_limits": { 00:07:47.253 "rw_ios_per_sec": 0, 00:07:47.253 "rw_mbytes_per_sec": 0, 00:07:47.253 "r_mbytes_per_sec": 0, 00:07:47.253 "w_mbytes_per_sec": 0 00:07:47.253 }, 00:07:47.253 "claimed": false, 00:07:47.253 "zoned": false, 00:07:47.253 "supported_io_types": { 00:07:47.253 "read": true, 00:07:47.253 "write": true, 00:07:47.253 "unmap": true, 00:07:47.253 "flush": true, 00:07:47.253 "reset": true, 00:07:47.253 "nvme_admin": false, 00:07:47.253 "nvme_io": false, 00:07:47.253 "nvme_io_md": false, 00:07:47.253 "write_zeroes": true, 00:07:47.253 "zcopy": true, 00:07:47.253 "get_zone_info": false, 00:07:47.253 "zone_management": false, 00:07:47.253 "zone_append": false, 00:07:47.253 "compare": false, 00:07:47.253 "compare_and_write": false, 00:07:47.253 "abort": true, 00:07:47.253 "seek_hole": false, 00:07:47.253 "seek_data": false, 00:07:47.253 "copy": true, 00:07:47.253 "nvme_iov_md": false 00:07:47.253 }, 00:07:47.253 "memory_domains": [ 00:07:47.253 { 00:07:47.253 "dma_device_id": "system", 00:07:47.253 "dma_device_type": 1 00:07:47.253 }, 00:07:47.253 { 00:07:47.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.253 "dma_device_type": 2 00:07:47.253 } 00:07:47.253 ], 00:07:47.253 "driver_specific": { 00:07:47.253 "passthru": { 00:07:47.253 "name": "Passthru0", 00:07:47.253 "base_bdev_name": "Malloc0" 00:07:47.253 } 00:07:47.253 } 00:07:47.253 } 00:07:47.253 ]' 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:47.253 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.253 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 19:27:40 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.511 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.511 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.511 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:47.511 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:47.511 ************************************ 00:07:47.511 END TEST rpc_integrity 00:07:47.511 ************************************ 00:07:47.511 19:27:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:47.511 00:07:47.511 real 0m0.373s 00:07:47.511 user 0m0.221s 00:07:47.511 sys 0m0.043s 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.511 19:27:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 19:27:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:47.511 19:27:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.511 19:27:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.511 19:27:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 ************************************ 00:07:47.511 START TEST rpc_plugins 00:07:47.511 ************************************ 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:47.511 { 00:07:47.511 "name": "Malloc1", 00:07:47.511 "aliases": [ 00:07:47.511 "c0a4528b-4fa5-4644-a84d-5427d8fe0db9" 00:07:47.511 ], 00:07:47.511 "product_name": "Malloc disk", 00:07:47.511 "block_size": 4096, 00:07:47.511 "num_blocks": 256, 00:07:47.511 "uuid": "c0a4528b-4fa5-4644-a84d-5427d8fe0db9", 00:07:47.511 "assigned_rate_limits": { 00:07:47.511 "rw_ios_per_sec": 0, 00:07:47.511 "rw_mbytes_per_sec": 0, 00:07:47.511 "r_mbytes_per_sec": 0, 00:07:47.511 "w_mbytes_per_sec": 0 00:07:47.511 }, 00:07:47.511 "claimed": false, 00:07:47.511 "zoned": false, 00:07:47.511 "supported_io_types": { 00:07:47.511 "read": true, 00:07:47.511 "write": true, 00:07:47.511 "unmap": true, 00:07:47.511 "flush": true, 00:07:47.511 "reset": true, 00:07:47.511 "nvme_admin": false, 00:07:47.511 "nvme_io": false, 00:07:47.511 "nvme_io_md": false, 00:07:47.511 "write_zeroes": true, 00:07:47.511 "zcopy": true, 00:07:47.511 "get_zone_info": false, 00:07:47.511 "zone_management": false, 00:07:47.511 "zone_append": false, 00:07:47.511 "compare": false, 00:07:47.511 "compare_and_write": false, 00:07:47.511 "abort": true, 00:07:47.511 "seek_hole": false, 00:07:47.511 "seek_data": false, 00:07:47.511 "copy": 
true, 00:07:47.511 "nvme_iov_md": false 00:07:47.511 }, 00:07:47.511 "memory_domains": [ 00:07:47.511 { 00:07:47.511 "dma_device_id": "system", 00:07:47.511 "dma_device_type": 1 00:07:47.511 }, 00:07:47.511 { 00:07:47.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.511 "dma_device_type": 2 00:07:47.511 } 00:07:47.511 ], 00:07:47.511 "driver_specific": {} 00:07:47.511 } 00:07:47.511 ]' 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:47.511 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.511 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.769 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:47.769 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.769 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:47.769 19:27:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.769 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:47.769 19:27:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:47.769 19:27:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:47.769 ************************************ 00:07:47.769 END TEST rpc_plugins 00:07:47.769 ************************************ 00:07:47.769 00:07:47.769 real 0m0.183s 00:07:47.769 user 0m0.128s 00:07:47.769 sys 0m0.012s 00:07:47.769 19:27:41 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.770 19:27:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:47.770 19:27:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:47.770 19:27:41 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.770 19:27:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.770 19:27:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.770 ************************************ 00:07:47.770 START TEST rpc_trace_cmd_test 00:07:47.770 ************************************ 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:47.770 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56817", 00:07:47.770 "tpoint_group_mask": "0x8", 00:07:47.770 "iscsi_conn": { 00:07:47.770 "mask": "0x2", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "scsi": { 00:07:47.770 "mask": "0x4", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "bdev": { 00:07:47.770 "mask": "0x8", 00:07:47.770 "tpoint_mask": "0xffffffffffffffff" 00:07:47.770 }, 00:07:47.770 "nvmf_rdma": { 00:07:47.770 "mask": "0x10", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "nvmf_tcp": { 00:07:47.770 "mask": "0x20", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "ftl": { 00:07:47.770 "mask": "0x40", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "blobfs": { 00:07:47.770 "mask": "0x80", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "dsa": { 00:07:47.770 "mask": "0x200", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "thread": { 00:07:47.770 "mask": "0x400", 00:07:47.770 
"tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "nvme_pcie": { 00:07:47.770 "mask": "0x800", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "iaa": { 00:07:47.770 "mask": "0x1000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "nvme_tcp": { 00:07:47.770 "mask": "0x2000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "bdev_nvme": { 00:07:47.770 "mask": "0x4000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "sock": { 00:07:47.770 "mask": "0x8000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "blob": { 00:07:47.770 "mask": "0x10000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "bdev_raid": { 00:07:47.770 "mask": "0x20000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 }, 00:07:47.770 "scheduler": { 00:07:47.770 "mask": "0x40000", 00:07:47.770 "tpoint_mask": "0x0" 00:07:47.770 } 00:07:47.770 }' 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:47.770 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:48.029 ************************************ 00:07:48.029 END TEST rpc_trace_cmd_test 00:07:48.029 ************************************ 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:48.029 00:07:48.029 real 0m0.278s 00:07:48.029 user 
0m0.237s 00:07:48.029 sys 0m0.031s 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.029 19:27:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.029 19:27:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:48.029 19:27:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:48.029 19:27:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:48.029 19:27:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.029 19:27:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.029 19:27:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.029 ************************************ 00:07:48.029 START TEST rpc_daemon_integrity 00:07:48.029 ************************************ 00:07:48.029 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:48.029 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:48.029 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.029 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.030 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.030 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:48.030 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:48.288 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:48.288 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:48.288 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.288 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.288 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:48.289 { 00:07:48.289 "name": "Malloc2", 00:07:48.289 "aliases": [ 00:07:48.289 "cef032a7-8b5d-4173-aa5e-e4cde1fed976" 00:07:48.289 ], 00:07:48.289 "product_name": "Malloc disk", 00:07:48.289 "block_size": 512, 00:07:48.289 "num_blocks": 16384, 00:07:48.289 "uuid": "cef032a7-8b5d-4173-aa5e-e4cde1fed976", 00:07:48.289 "assigned_rate_limits": { 00:07:48.289 "rw_ios_per_sec": 0, 00:07:48.289 "rw_mbytes_per_sec": 0, 00:07:48.289 "r_mbytes_per_sec": 0, 00:07:48.289 "w_mbytes_per_sec": 0 00:07:48.289 }, 00:07:48.289 "claimed": false, 00:07:48.289 "zoned": false, 00:07:48.289 "supported_io_types": { 00:07:48.289 "read": true, 00:07:48.289 "write": true, 00:07:48.289 "unmap": true, 00:07:48.289 "flush": true, 00:07:48.289 "reset": true, 00:07:48.289 "nvme_admin": false, 00:07:48.289 "nvme_io": false, 00:07:48.289 "nvme_io_md": false, 00:07:48.289 "write_zeroes": true, 00:07:48.289 "zcopy": true, 00:07:48.289 "get_zone_info": false, 00:07:48.289 "zone_management": false, 00:07:48.289 "zone_append": false, 00:07:48.289 "compare": false, 00:07:48.289 "compare_and_write": false, 00:07:48.289 "abort": true, 00:07:48.289 "seek_hole": false, 00:07:48.289 "seek_data": false, 00:07:48.289 "copy": true, 00:07:48.289 "nvme_iov_md": false 00:07:48.289 }, 00:07:48.289 "memory_domains": [ 00:07:48.289 { 00:07:48.289 "dma_device_id": "system", 00:07:48.289 "dma_device_type": 1 00:07:48.289 }, 00:07:48.289 { 00:07:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.289 "dma_device_type": 2 00:07:48.289 } 
00:07:48.289 ], 00:07:48.289 "driver_specific": {} 00:07:48.289 } 00:07:48.289 ]' 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 [2024-12-05 19:27:41.580661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:48.289 [2024-12-05 19:27:41.580783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.289 [2024-12-05 19:27:41.580819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:48.289 [2024-12-05 19:27:41.580838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.289 [2024-12-05 19:27:41.584002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.289 [2024-12-05 19:27:41.584053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:48.289 Passthru0 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:48.289 { 00:07:48.289 "name": "Malloc2", 00:07:48.289 "aliases": [ 00:07:48.289 "cef032a7-8b5d-4173-aa5e-e4cde1fed976" 
00:07:48.289 ], 00:07:48.289 "product_name": "Malloc disk", 00:07:48.289 "block_size": 512, 00:07:48.289 "num_blocks": 16384, 00:07:48.289 "uuid": "cef032a7-8b5d-4173-aa5e-e4cde1fed976", 00:07:48.289 "assigned_rate_limits": { 00:07:48.289 "rw_ios_per_sec": 0, 00:07:48.289 "rw_mbytes_per_sec": 0, 00:07:48.289 "r_mbytes_per_sec": 0, 00:07:48.289 "w_mbytes_per_sec": 0 00:07:48.289 }, 00:07:48.289 "claimed": true, 00:07:48.289 "claim_type": "exclusive_write", 00:07:48.289 "zoned": false, 00:07:48.289 "supported_io_types": { 00:07:48.289 "read": true, 00:07:48.289 "write": true, 00:07:48.289 "unmap": true, 00:07:48.289 "flush": true, 00:07:48.289 "reset": true, 00:07:48.289 "nvme_admin": false, 00:07:48.289 "nvme_io": false, 00:07:48.289 "nvme_io_md": false, 00:07:48.289 "write_zeroes": true, 00:07:48.289 "zcopy": true, 00:07:48.289 "get_zone_info": false, 00:07:48.289 "zone_management": false, 00:07:48.289 "zone_append": false, 00:07:48.289 "compare": false, 00:07:48.289 "compare_and_write": false, 00:07:48.289 "abort": true, 00:07:48.289 "seek_hole": false, 00:07:48.289 "seek_data": false, 00:07:48.289 "copy": true, 00:07:48.289 "nvme_iov_md": false 00:07:48.289 }, 00:07:48.289 "memory_domains": [ 00:07:48.289 { 00:07:48.289 "dma_device_id": "system", 00:07:48.289 "dma_device_type": 1 00:07:48.289 }, 00:07:48.289 { 00:07:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.289 "dma_device_type": 2 00:07:48.289 } 00:07:48.289 ], 00:07:48.289 "driver_specific": {} 00:07:48.289 }, 00:07:48.289 { 00:07:48.289 "name": "Passthru0", 00:07:48.289 "aliases": [ 00:07:48.289 "c7d25174-aa11-5cf2-92d2-1695601cb4c1" 00:07:48.289 ], 00:07:48.289 "product_name": "passthru", 00:07:48.289 "block_size": 512, 00:07:48.289 "num_blocks": 16384, 00:07:48.289 "uuid": "c7d25174-aa11-5cf2-92d2-1695601cb4c1", 00:07:48.289 "assigned_rate_limits": { 00:07:48.289 "rw_ios_per_sec": 0, 00:07:48.289 "rw_mbytes_per_sec": 0, 00:07:48.289 "r_mbytes_per_sec": 0, 00:07:48.289 "w_mbytes_per_sec": 0 
00:07:48.289 }, 00:07:48.289 "claimed": false, 00:07:48.289 "zoned": false, 00:07:48.289 "supported_io_types": { 00:07:48.289 "read": true, 00:07:48.289 "write": true, 00:07:48.289 "unmap": true, 00:07:48.289 "flush": true, 00:07:48.289 "reset": true, 00:07:48.289 "nvme_admin": false, 00:07:48.289 "nvme_io": false, 00:07:48.289 "nvme_io_md": false, 00:07:48.289 "write_zeroes": true, 00:07:48.289 "zcopy": true, 00:07:48.289 "get_zone_info": false, 00:07:48.289 "zone_management": false, 00:07:48.289 "zone_append": false, 00:07:48.289 "compare": false, 00:07:48.289 "compare_and_write": false, 00:07:48.289 "abort": true, 00:07:48.289 "seek_hole": false, 00:07:48.289 "seek_data": false, 00:07:48.289 "copy": true, 00:07:48.289 "nvme_iov_md": false 00:07:48.289 }, 00:07:48.289 "memory_domains": [ 00:07:48.289 { 00:07:48.289 "dma_device_id": "system", 00:07:48.289 "dma_device_type": 1 00:07:48.289 }, 00:07:48.289 { 00:07:48.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.289 "dma_device_type": 2 00:07:48.289 } 00:07:48.289 ], 00:07:48.289 "driver_specific": { 00:07:48.289 "passthru": { 00:07:48.289 "name": "Passthru0", 00:07:48.289 "base_bdev_name": "Malloc2" 00:07:48.289 } 00:07:48.289 } 00:07:48.289 } 00:07:48.289 ]' 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:48.289 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:48.548 ************************************ 00:07:48.548 END TEST rpc_daemon_integrity 00:07:48.548 ************************************ 00:07:48.548 19:27:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:48.548 00:07:48.548 real 0m0.385s 00:07:48.548 user 0m0.223s 00:07:48.548 sys 0m0.047s 00:07:48.548 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.548 19:27:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.548 19:27:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:48.548 19:27:41 rpc -- rpc/rpc.sh@84 -- # killprocess 56817 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@954 -- # '[' -z 56817 ']' 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@958 -- # kill -0 56817 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@959 -- # uname 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56817 00:07:48.549 killing process with pid 56817 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56817' 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@973 -- # kill 56817 00:07:48.549 19:27:41 rpc -- common/autotest_common.sh@978 -- # wait 56817 00:07:51.097 00:07:51.097 real 0m5.254s 00:07:51.097 user 0m5.967s 00:07:51.097 sys 0m0.932s 00:07:51.097 19:27:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.097 ************************************ 00:07:51.097 END TEST rpc 00:07:51.097 ************************************ 00:07:51.097 19:27:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.097 19:27:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:51.097 19:27:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.097 19:27:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.097 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:07:51.097 ************************************ 00:07:51.097 START TEST skip_rpc 00:07:51.097 ************************************ 00:07:51.097 19:27:44 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:51.097 * Looking for test storage... 
00:07:51.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:51.097 19:27:44 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:51.097 19:27:44 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:51.097 19:27:44 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:51.097 19:27:44 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:51.097 19:27:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.098 19:27:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:51.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.098 --rc genhtml_branch_coverage=1 00:07:51.098 --rc genhtml_function_coverage=1 00:07:51.098 --rc genhtml_legend=1 00:07:51.098 --rc geninfo_all_blocks=1 00:07:51.098 --rc geninfo_unexecuted_blocks=1 00:07:51.098 00:07:51.098 ' 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:51.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.098 --rc genhtml_branch_coverage=1 00:07:51.098 --rc genhtml_function_coverage=1 00:07:51.098 --rc genhtml_legend=1 00:07:51.098 --rc geninfo_all_blocks=1 00:07:51.098 --rc geninfo_unexecuted_blocks=1 00:07:51.098 00:07:51.098 ' 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:51.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.098 --rc genhtml_branch_coverage=1 00:07:51.098 --rc genhtml_function_coverage=1 00:07:51.098 --rc genhtml_legend=1 00:07:51.098 --rc geninfo_all_blocks=1 00:07:51.098 --rc geninfo_unexecuted_blocks=1 00:07:51.098 00:07:51.098 ' 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:51.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.098 --rc genhtml_branch_coverage=1 00:07:51.098 --rc genhtml_function_coverage=1 00:07:51.098 --rc genhtml_legend=1 00:07:51.098 --rc geninfo_all_blocks=1 00:07:51.098 --rc geninfo_unexecuted_blocks=1 00:07:51.098 00:07:51.098 ' 00:07:51.098 19:27:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:51.098 19:27:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:51.098 19:27:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.098 19:27:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.098 ************************************ 00:07:51.098 START TEST skip_rpc 00:07:51.098 ************************************ 00:07:51.098 19:27:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:51.098 19:27:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57052 00:07:51.098 19:27:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.098 19:27:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:51.098 19:27:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:51.098 [2024-12-05 19:27:44.495946] Starting SPDK v25.01-pre 
git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:07:51.098 [2024-12-05 19:27:44.496388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57052 ] 00:07:51.357 [2024-12-05 19:27:44.691894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.616 [2024-12-05 19:27:44.847735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57052 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57052 ']' 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57052 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57052 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.887 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.888 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57052' 00:07:56.888 killing process with pid 57052 00:07:56.888 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57052 00:07:56.888 19:27:49 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57052 00:07:58.793 ************************************ 00:07:58.793 END TEST skip_rpc 00:07:58.793 ************************************ 00:07:58.793 00:07:58.793 real 0m7.443s 00:07:58.793 user 0m6.854s 00:07:58.793 sys 0m0.478s 00:07:58.793 19:27:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.793 19:27:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 19:27:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:58.793 19:27:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.793 19:27:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.793 19:27:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 
************************************ 00:07:58.793 START TEST skip_rpc_with_json 00:07:58.793 ************************************ 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:58.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57156 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57156 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57156 ']' 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.793 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.794 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.794 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.794 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.794 19:27:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.794 [2024-12-05 19:27:51.981633] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:07:58.794 [2024-12-05 19:27:51.981835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57156 ] 00:07:58.794 [2024-12-05 19:27:52.167020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.052 [2024-12-05 19:27:52.304095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.990 [2024-12-05 19:27:53.227013] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:59.990 request: 00:07:59.990 { 00:07:59.990 "trtype": "tcp", 00:07:59.990 "method": "nvmf_get_transports", 00:07:59.990 "req_id": 1 00:07:59.990 } 00:07:59.990 Got JSON-RPC error response 00:07:59.990 response: 00:07:59.990 { 00:07:59.990 "code": -19, 00:07:59.990 "message": "No such device" 00:07:59.990 } 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.990 [2024-12-05 19:27:53.239141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.990 19:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:59.990 { 00:07:59.990 "subsystems": [ 00:07:59.990 { 00:07:59.990 "subsystem": "fsdev", 00:07:59.990 "config": [ 00:07:59.990 { 00:07:59.990 "method": "fsdev_set_opts", 00:07:59.990 "params": { 00:07:59.990 "fsdev_io_pool_size": 65535, 00:07:59.990 "fsdev_io_cache_size": 256 00:07:59.990 } 00:07:59.990 } 00:07:59.990 ] 00:07:59.990 }, 00:07:59.990 { 00:07:59.990 "subsystem": "keyring", 00:07:59.990 "config": [] 00:07:59.990 }, 00:07:59.990 { 00:07:59.990 "subsystem": "iobuf", 00:07:59.990 "config": [ 00:07:59.990 { 00:07:59.990 "method": "iobuf_set_options", 00:07:59.990 "params": { 00:07:59.990 "small_pool_count": 8192, 00:07:59.990 "large_pool_count": 1024, 00:07:59.990 "small_bufsize": 8192, 00:07:59.990 "large_bufsize": 135168, 00:07:59.990 "enable_numa": false 00:07:59.990 } 00:07:59.990 } 00:07:59.990 ] 00:07:59.990 }, 00:07:59.990 { 00:07:59.990 "subsystem": "sock", 00:07:59.990 "config": [ 00:07:59.990 { 00:07:59.991 "method": "sock_set_default_impl", 00:07:59.991 "params": { 00:07:59.991 "impl_name": "posix" 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "sock_impl_set_options", 00:07:59.991 "params": { 00:07:59.991 "impl_name": "ssl", 00:07:59.991 "recv_buf_size": 4096, 00:07:59.991 "send_buf_size": 4096, 00:07:59.991 "enable_recv_pipe": true, 00:07:59.991 "enable_quickack": false, 00:07:59.991 
"enable_placement_id": 0, 00:07:59.991 "enable_zerocopy_send_server": true, 00:07:59.991 "enable_zerocopy_send_client": false, 00:07:59.991 "zerocopy_threshold": 0, 00:07:59.991 "tls_version": 0, 00:07:59.991 "enable_ktls": false 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "sock_impl_set_options", 00:07:59.991 "params": { 00:07:59.991 "impl_name": "posix", 00:07:59.991 "recv_buf_size": 2097152, 00:07:59.991 "send_buf_size": 2097152, 00:07:59.991 "enable_recv_pipe": true, 00:07:59.991 "enable_quickack": false, 00:07:59.991 "enable_placement_id": 0, 00:07:59.991 "enable_zerocopy_send_server": true, 00:07:59.991 "enable_zerocopy_send_client": false, 00:07:59.991 "zerocopy_threshold": 0, 00:07:59.991 "tls_version": 0, 00:07:59.991 "enable_ktls": false 00:07:59.991 } 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "vmd", 00:07:59.991 "config": [] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "accel", 00:07:59.991 "config": [ 00:07:59.991 { 00:07:59.991 "method": "accel_set_options", 00:07:59.991 "params": { 00:07:59.991 "small_cache_size": 128, 00:07:59.991 "large_cache_size": 16, 00:07:59.991 "task_count": 2048, 00:07:59.991 "sequence_count": 2048, 00:07:59.991 "buf_count": 2048 00:07:59.991 } 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "bdev", 00:07:59.991 "config": [ 00:07:59.991 { 00:07:59.991 "method": "bdev_set_options", 00:07:59.991 "params": { 00:07:59.991 "bdev_io_pool_size": 65535, 00:07:59.991 "bdev_io_cache_size": 256, 00:07:59.991 "bdev_auto_examine": true, 00:07:59.991 "iobuf_small_cache_size": 128, 00:07:59.991 "iobuf_large_cache_size": 16 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "bdev_raid_set_options", 00:07:59.991 "params": { 00:07:59.991 "process_window_size_kb": 1024, 00:07:59.991 "process_max_bandwidth_mb_sec": 0 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "bdev_iscsi_set_options", 
00:07:59.991 "params": { 00:07:59.991 "timeout_sec": 30 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "bdev_nvme_set_options", 00:07:59.991 "params": { 00:07:59.991 "action_on_timeout": "none", 00:07:59.991 "timeout_us": 0, 00:07:59.991 "timeout_admin_us": 0, 00:07:59.991 "keep_alive_timeout_ms": 10000, 00:07:59.991 "arbitration_burst": 0, 00:07:59.991 "low_priority_weight": 0, 00:07:59.991 "medium_priority_weight": 0, 00:07:59.991 "high_priority_weight": 0, 00:07:59.991 "nvme_adminq_poll_period_us": 10000, 00:07:59.991 "nvme_ioq_poll_period_us": 0, 00:07:59.991 "io_queue_requests": 0, 00:07:59.991 "delay_cmd_submit": true, 00:07:59.991 "transport_retry_count": 4, 00:07:59.991 "bdev_retry_count": 3, 00:07:59.991 "transport_ack_timeout": 0, 00:07:59.991 "ctrlr_loss_timeout_sec": 0, 00:07:59.991 "reconnect_delay_sec": 0, 00:07:59.991 "fast_io_fail_timeout_sec": 0, 00:07:59.991 "disable_auto_failback": false, 00:07:59.991 "generate_uuids": false, 00:07:59.991 "transport_tos": 0, 00:07:59.991 "nvme_error_stat": false, 00:07:59.991 "rdma_srq_size": 0, 00:07:59.991 "io_path_stat": false, 00:07:59.991 "allow_accel_sequence": false, 00:07:59.991 "rdma_max_cq_size": 0, 00:07:59.991 "rdma_cm_event_timeout_ms": 0, 00:07:59.991 "dhchap_digests": [ 00:07:59.991 "sha256", 00:07:59.991 "sha384", 00:07:59.991 "sha512" 00:07:59.991 ], 00:07:59.991 "dhchap_dhgroups": [ 00:07:59.991 "null", 00:07:59.991 "ffdhe2048", 00:07:59.991 "ffdhe3072", 00:07:59.991 "ffdhe4096", 00:07:59.991 "ffdhe6144", 00:07:59.991 "ffdhe8192" 00:07:59.991 ] 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "bdev_nvme_set_hotplug", 00:07:59.991 "params": { 00:07:59.991 "period_us": 100000, 00:07:59.991 "enable": false 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "bdev_wait_for_examine" 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "scsi", 00:07:59.991 "config": null 00:07:59.991 }, 00:07:59.991 { 
00:07:59.991 "subsystem": "scheduler", 00:07:59.991 "config": [ 00:07:59.991 { 00:07:59.991 "method": "framework_set_scheduler", 00:07:59.991 "params": { 00:07:59.991 "name": "static" 00:07:59.991 } 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "vhost_scsi", 00:07:59.991 "config": [] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "vhost_blk", 00:07:59.991 "config": [] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "ublk", 00:07:59.991 "config": [] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "nbd", 00:07:59.991 "config": [] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "nvmf", 00:07:59.991 "config": [ 00:07:59.991 { 00:07:59.991 "method": "nvmf_set_config", 00:07:59.991 "params": { 00:07:59.991 "discovery_filter": "match_any", 00:07:59.991 "admin_cmd_passthru": { 00:07:59.991 "identify_ctrlr": false 00:07:59.991 }, 00:07:59.991 "dhchap_digests": [ 00:07:59.991 "sha256", 00:07:59.991 "sha384", 00:07:59.991 "sha512" 00:07:59.991 ], 00:07:59.991 "dhchap_dhgroups": [ 00:07:59.991 "null", 00:07:59.991 "ffdhe2048", 00:07:59.991 "ffdhe3072", 00:07:59.991 "ffdhe4096", 00:07:59.991 "ffdhe6144", 00:07:59.991 "ffdhe8192" 00:07:59.991 ] 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "nvmf_set_max_subsystems", 00:07:59.991 "params": { 00:07:59.991 "max_subsystems": 1024 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "nvmf_set_crdt", 00:07:59.991 "params": { 00:07:59.991 "crdt1": 0, 00:07:59.991 "crdt2": 0, 00:07:59.991 "crdt3": 0 00:07:59.991 } 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "method": "nvmf_create_transport", 00:07:59.991 "params": { 00:07:59.991 "trtype": "TCP", 00:07:59.991 "max_queue_depth": 128, 00:07:59.991 "max_io_qpairs_per_ctrlr": 127, 00:07:59.991 "in_capsule_data_size": 4096, 00:07:59.991 "max_io_size": 131072, 00:07:59.991 "io_unit_size": 131072, 00:07:59.991 "max_aq_depth": 128, 00:07:59.991 "num_shared_buffers": 511, 
00:07:59.991 "buf_cache_size": 4294967295, 00:07:59.991 "dif_insert_or_strip": false, 00:07:59.991 "zcopy": false, 00:07:59.991 "c2h_success": true, 00:07:59.991 "sock_priority": 0, 00:07:59.991 "abort_timeout_sec": 1, 00:07:59.991 "ack_timeout": 0, 00:07:59.991 "data_wr_pool_size": 0 00:07:59.991 } 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 }, 00:07:59.991 { 00:07:59.991 "subsystem": "iscsi", 00:07:59.991 "config": [ 00:07:59.991 { 00:07:59.991 "method": "iscsi_set_options", 00:07:59.991 "params": { 00:07:59.991 "node_base": "iqn.2016-06.io.spdk", 00:07:59.991 "max_sessions": 128, 00:07:59.991 "max_connections_per_session": 2, 00:07:59.991 "max_queue_depth": 64, 00:07:59.991 "default_time2wait": 2, 00:07:59.991 "default_time2retain": 20, 00:07:59.991 "first_burst_length": 8192, 00:07:59.991 "immediate_data": true, 00:07:59.991 "allow_duplicated_isid": false, 00:07:59.991 "error_recovery_level": 0, 00:07:59.991 "nop_timeout": 60, 00:07:59.991 "nop_in_interval": 30, 00:07:59.991 "disable_chap": false, 00:07:59.991 "require_chap": false, 00:07:59.991 "mutual_chap": false, 00:07:59.991 "chap_group": 0, 00:07:59.991 "max_large_datain_per_connection": 64, 00:07:59.991 "max_r2t_per_connection": 4, 00:07:59.991 "pdu_pool_size": 36864, 00:07:59.991 "immediate_data_pool_size": 16384, 00:07:59.991 "data_out_pool_size": 2048 00:07:59.991 } 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 } 00:07:59.991 ] 00:07:59.991 } 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57156 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57156 ']' 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57156 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.991 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57156 00:08:00.250 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.250 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.250 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57156' 00:08:00.250 killing process with pid 57156 00:08:00.250 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57156 00:08:00.250 19:27:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57156 00:08:02.777 19:27:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57212 00:08:02.777 19:27:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:02.778 19:27:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57212 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57212 ']' 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57212 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57212 00:08:08.047 killing process with pid 57212 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57212' 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57212 00:08:08.047 19:28:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57212 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:10.007 00:08:10.007 real 0m11.079s 00:08:10.007 user 0m10.543s 00:08:10.007 sys 0m1.024s 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.007 ************************************ 00:08:10.007 END TEST skip_rpc_with_json 00:08:10.007 ************************************ 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.007 19:28:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:10.007 19:28:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.007 19:28:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.007 19:28:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.007 ************************************ 00:08:10.007 START TEST skip_rpc_with_delay 00:08:10.007 ************************************ 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:10.007 
19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.007 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:10.008 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.008 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:10.008 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:10.008 19:28:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:10.008 [2024-12-05 19:28:03.123995] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:10.008 19:28:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:10.008 19:28:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.008 ************************************ 00:08:10.008 END TEST skip_rpc_with_delay 00:08:10.008 ************************************ 00:08:10.008 19:28:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:10.008 19:28:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.008 00:08:10.008 real 0m0.216s 00:08:10.008 user 0m0.106s 00:08:10.008 sys 0m0.108s 00:08:10.008 19:28:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.008 19:28:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:10.008 19:28:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:10.008 19:28:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:10.008 19:28:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:10.008 19:28:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.008 19:28:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.008 19:28:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.008 ************************************ 00:08:10.008 START TEST exit_on_failed_rpc_init 00:08:10.008 ************************************ 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57340 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57340 00:08:10.008 19:28:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57340 ']' 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.008 19:28:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:10.008 [2024-12-05 19:28:03.384981] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:10.008 [2024-12-05 19:28:03.385653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57340 ] 00:08:10.267 [2024-12-05 19:28:03.568970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.267 [2024-12-05 19:28:03.688508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.204 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.204 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:11.205 19:28:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:11.205 19:28:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:11.464 [2024-12-05 19:28:04.657595] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:08:11.464 [2024-12-05 19:28:04.657804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57358 ] 00:08:11.464 [2024-12-05 19:28:04.845123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.723 [2024-12-05 19:28:05.002507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.723 [2024-12-05 19:28:05.002637] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:11.723 [2024-12-05 19:28:05.002664] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:11.723 [2024-12-05 19:28:05.002695] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57340 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57340 ']' 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57340 00:08:11.982 19:28:05 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57340 00:08:11.982 killing process with pid 57340 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57340' 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57340 00:08:11.982 19:28:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57340 00:08:14.519 00:08:14.519 real 0m4.192s 00:08:14.519 user 0m4.627s 00:08:14.519 sys 0m0.697s 00:08:14.519 19:28:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.519 19:28:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:14.519 ************************************ 00:08:14.519 END TEST exit_on_failed_rpc_init 00:08:14.519 ************************************ 00:08:14.519 19:28:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:14.519 00:08:14.519 real 0m23.343s 00:08:14.519 user 0m22.310s 00:08:14.519 sys 0m2.529s 00:08:14.519 19:28:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.519 19:28:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.519 ************************************ 00:08:14.519 END TEST skip_rpc 00:08:14.519 ************************************ 00:08:14.519 19:28:07 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:14.519 19:28:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.519 19:28:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.519 19:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.519 ************************************ 00:08:14.519 START TEST rpc_client 00:08:14.519 ************************************ 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:14.519 * Looking for test storage... 00:08:14.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@345 
-- # : 1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.519 19:28:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:14.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.519 --rc genhtml_branch_coverage=1 00:08:14.519 --rc genhtml_function_coverage=1 00:08:14.519 --rc genhtml_legend=1 00:08:14.519 --rc geninfo_all_blocks=1 00:08:14.519 --rc geninfo_unexecuted_blocks=1 00:08:14.519 00:08:14.519 ' 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:14.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.519 --rc genhtml_branch_coverage=1 00:08:14.519 --rc genhtml_function_coverage=1 00:08:14.519 --rc 
genhtml_legend=1 00:08:14.519 --rc geninfo_all_blocks=1 00:08:14.519 --rc geninfo_unexecuted_blocks=1 00:08:14.519 00:08:14.519 ' 00:08:14.519 19:28:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:14.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.519 --rc genhtml_branch_coverage=1 00:08:14.520 --rc genhtml_function_coverage=1 00:08:14.520 --rc genhtml_legend=1 00:08:14.520 --rc geninfo_all_blocks=1 00:08:14.520 --rc geninfo_unexecuted_blocks=1 00:08:14.520 00:08:14.520 ' 00:08:14.520 19:28:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:14.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.520 --rc genhtml_branch_coverage=1 00:08:14.520 --rc genhtml_function_coverage=1 00:08:14.520 --rc genhtml_legend=1 00:08:14.520 --rc geninfo_all_blocks=1 00:08:14.520 --rc geninfo_unexecuted_blocks=1 00:08:14.520 00:08:14.520 ' 00:08:14.520 19:28:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:14.520 OK 00:08:14.520 19:28:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:14.520 00:08:14.520 real 0m0.279s 00:08:14.520 user 0m0.166s 00:08:14.520 sys 0m0.115s 00:08:14.520 19:28:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.520 19:28:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:14.520 ************************************ 00:08:14.520 END TEST rpc_client 00:08:14.520 ************************************ 00:08:14.520 19:28:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:14.520 19:28:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.520 19:28:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.520 19:28:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.520 ************************************ 00:08:14.520 START TEST json_config 
00:08:14.520 ************************************ 00:08:14.520 19:28:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:14.520 19:28:07 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:14.520 19:28:07 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:14.520 19:28:07 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.780 19:28:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.780 19:28:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.780 19:28:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.780 19:28:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.780 19:28:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.780 19:28:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:14.780 19:28:08 json_config -- scripts/common.sh@345 -- # : 1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.780 19:28:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.780 19:28:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@353 -- # local d=1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.780 19:28:08 json_config -- scripts/common.sh@355 -- # echo 1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.780 19:28:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@353 -- # local d=2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.780 19:28:08 json_config -- scripts/common.sh@355 -- # echo 2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.780 19:28:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.780 19:28:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.780 19:28:08 json_config -- scripts/common.sh@368 -- # return 0 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.780 --rc genhtml_branch_coverage=1 00:08:14.780 --rc genhtml_function_coverage=1 00:08:14.780 --rc genhtml_legend=1 00:08:14.780 --rc geninfo_all_blocks=1 00:08:14.780 --rc geninfo_unexecuted_blocks=1 00:08:14.780 00:08:14.780 ' 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.780 --rc genhtml_branch_coverage=1 00:08:14.780 --rc genhtml_function_coverage=1 00:08:14.780 --rc genhtml_legend=1 00:08:14.780 --rc geninfo_all_blocks=1 00:08:14.780 --rc geninfo_unexecuted_blocks=1 00:08:14.780 00:08:14.780 ' 00:08:14.780 19:28:08 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.780 --rc genhtml_branch_coverage=1 00:08:14.780 --rc genhtml_function_coverage=1 00:08:14.780 --rc genhtml_legend=1 00:08:14.780 --rc geninfo_all_blocks=1 00:08:14.780 --rc geninfo_unexecuted_blocks=1 00:08:14.780 00:08:14.780 ' 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.780 --rc genhtml_branch_coverage=1 00:08:14.780 --rc genhtml_function_coverage=1 00:08:14.780 --rc genhtml_legend=1 00:08:14.780 --rc geninfo_all_blocks=1 00:08:14.780 --rc geninfo_unexecuted_blocks=1 00:08:14.780 00:08:14.780 ' 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:369e65d1-545d-4691-9977-d4c00e5b0446 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=369e65d1-545d-4691-9977-d4c00e5b0446 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.780 19:28:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.780 19:28:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.780 19:28:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.780 19:28:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.780 19:28:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.780 19:28:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.780 19:28:08 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.780 19:28:08 json_config -- paths/export.sh@5 -- # export PATH 00:08:14.780 19:28:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@51 -- # : 0 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.780 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.780 19:28:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
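A side effect visible in the trace above: each `source` of paths/export.sh prepends the same toolchain directories again, so PATH ends up carrying /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin four times over. A hypothetical helper (not part of the SPDK scripts) that collapses such repeats while preserving first-occurrence order might look like:

```shell
#!/usr/bin/env bash
# dedup_path: print $1 (a colon-separated PATH string) with duplicate
# entries removed, keeping the first occurrence of each directory.
dedup_path() {
    local out='' seen='' dir
    local IFS=':'
    for dir in $1; do                 # IFS=':' splits on path separators
        case ":$seen:" in
            *":$dir:"*) ;;            # already emitted, drop the repeat
            *) seen="$seen:$dir"
               out="${out:+$out:}$dir" ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Running `PATH=$(dedup_path "$PATH")` after the final `source` would shrink the exported value without changing lookup order, since the first copy of each directory is the one that wins lookup anyway.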
00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:14.780 WARNING: No tests are enabled so not running JSON configuration tests 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:14.780 19:28:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:14.780 ************************************ 00:08:14.780 END TEST json_config 00:08:14.780 ************************************ 00:08:14.780 00:08:14.780 real 0m0.178s 00:08:14.780 user 0m0.110s 00:08:14.780 sys 0m0.064s 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.780 19:28:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:14.780 19:28:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:14.780 19:28:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.780 19:28:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.780 19:28:08 -- common/autotest_common.sh@10 -- # set +x 00:08:14.780 ************************************ 00:08:14.780 START TEST json_config_extra_key 00:08:14.780 ************************************ 00:08:14.780 19:28:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:14.780 19:28:08 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:14.781 19:28:08 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:08:14.781 19:28:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.046 19:28:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:15.046 19:28:08 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.046 19:28:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.046 --rc genhtml_branch_coverage=1 00:08:15.046 --rc genhtml_function_coverage=1 00:08:15.046 --rc genhtml_legend=1 00:08:15.046 --rc geninfo_all_blocks=1 00:08:15.046 --rc geninfo_unexecuted_blocks=1 00:08:15.046 00:08:15.046 ' 00:08:15.046 19:28:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.046 --rc genhtml_branch_coverage=1 00:08:15.046 --rc genhtml_function_coverage=1 00:08:15.046 --rc 
genhtml_legend=1 00:08:15.046 --rc geninfo_all_blocks=1 00:08:15.046 --rc geninfo_unexecuted_blocks=1 00:08:15.046 00:08:15.046 ' 00:08:15.046 19:28:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.046 --rc genhtml_branch_coverage=1 00:08:15.046 --rc genhtml_function_coverage=1 00:08:15.046 --rc genhtml_legend=1 00:08:15.046 --rc geninfo_all_blocks=1 00:08:15.046 --rc geninfo_unexecuted_blocks=1 00:08:15.046 00:08:15.046 ' 00:08:15.046 19:28:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.046 --rc genhtml_branch_coverage=1 00:08:15.046 --rc genhtml_function_coverage=1 00:08:15.046 --rc genhtml_legend=1 00:08:15.046 --rc geninfo_all_blocks=1 00:08:15.046 --rc geninfo_unexecuted_blocks=1 00:08:15.046 00:08:15.046 ' 00:08:15.046 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:369e65d1-545d-4691-9977-d4c00e5b0446 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=369e65d1-545d-4691-9977-d4c00e5b0446 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.046 19:28:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.046 19:28:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.046 19:28:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.046 19:28:08 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.046 19:28:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.047 19:28:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:15.047 19:28:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.047 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.047 19:28:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:15.047 INFO: launching applications... 00:08:15.047 Waiting for target to run... 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
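The `[: : integer expression expected` line in the trace above is bash complaining about `'[' '' -eq 1 ']'`: nvmf/common.sh line 33 hands an empty expansion to an arithmetic test. Defaulting the operand avoids the error; `flag_eq_one` below is a hypothetical stand-in for that check, not an SPDK function:

```shell
#!/usr/bin/env bash
# flag_eq_one: arithmetic test that tolerates an unset or empty argument.
# ${1:-0} substitutes 0 when $1 is missing or empty, so [ always sees an
# integer and never prints "integer expression expected".
flag_eq_one() {
    [ "${1:-0}" -eq 1 ]
}
```

With the default in place, an empty flag simply tests false instead of erroring; the run above survives only because `[` returns nonzero on the malformed test and the script carries on regardless.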
00:08:15.047 19:28:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57568 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57568 /var/tmp/spdk_tgt.sock 00:08:15.047 19:28:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:15.047 19:28:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57568 ']' 00:08:15.047 19:28:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:15.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:15.047 19:28:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.047 19:28:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:15.047 19:28:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.047 19:28:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:15.047 [2024-12-05 19:28:08.428274] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:15.047 [2024-12-05 19:28:08.428743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57568 ] 00:08:15.614 [2024-12-05 19:28:08.905786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.614 [2024-12-05 19:28:09.048325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.553 19:28:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.553 00:08:16.553 INFO: shutting down applications... 00:08:16.553 19:28:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:16.553 19:28:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
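The shutdown sequence traced around this point (json_config/common.sh lines 38-45) sends SIGINT to the target and then polls `kill -0 $pid` — signal 0 delivers nothing and only checks that the PID still exists — sleeping 0.5 s between checks for up to 30 iterations. A condensed, self-contained sketch of that pattern (the function name is illustrative):

```shell
#!/usr/bin/env bash
# wait_for_exit: ask a process to stop with SIGINT, then poll until it
# is gone or the retry budget (default 30 x 0.5 s, ~15 s) runs out.
wait_for_exit() {
    local pid=$1 retries=${2:-30} i
    kill -SIGINT "$pid" 2>/dev/null
    for ((i = 0; i < retries; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 0                  # process exited: clean shutdown
        fi
        sleep 0.5
    done
    return 1                          # still alive after the budget
}
```

`kill -0` is cheaper than parsing `ps` output, and the bounded loop gives the target a deadline for a graceful SIGINT shutdown instead of blocking forever in `wait`.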
00:08:16.553 19:28:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57568 ]] 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57568 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:08:16.553 19:28:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:16.812 19:28:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:16.812 19:28:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:16.812 19:28:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:08:16.812 19:28:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:17.380 19:28:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:17.380 19:28:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:17.380 19:28:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:08:17.380 19:28:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:17.948 19:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:17.948 19:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:17.948 19:28:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:08:17.948 19:28:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:18.516 19:28:11 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:08:18.516 19:28:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:18.516 19:28:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:08:18.516 19:28:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57568 00:08:19.084 SPDK target shutdown done 00:08:19.084 Success 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:19.084 19:28:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:19.084 19:28:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:19.084 00:08:19.084 real 0m4.116s 00:08:19.084 user 0m3.772s 00:08:19.084 sys 0m0.648s 00:08:19.084 ************************************ 00:08:19.084 END TEST json_config_extra_key 00:08:19.084 ************************************ 00:08:19.084 19:28:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.084 19:28:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:19.084 19:28:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:19.084 19:28:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.084 19:28:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.084 19:28:12 -- common/autotest_common.sh@10 -- # set +x 00:08:19.084 ************************************ 00:08:19.084 START TEST alias_rpc 00:08:19.084 
************************************ 00:08:19.084 19:28:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:19.084 * Looking for test storage... 00:08:19.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:19.084 19:28:12 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.084 19:28:12 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.084 19:28:12 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.084 19:28:12 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:19.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.084 19:28:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.085 19:28:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.085 19:28:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.085 --rc genhtml_branch_coverage=1 00:08:19.085 --rc genhtml_function_coverage=1 00:08:19.085 --rc genhtml_legend=1 00:08:19.085 --rc geninfo_all_blocks=1 00:08:19.085 --rc geninfo_unexecuted_blocks=1 00:08:19.085 00:08:19.085 ' 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.085 --rc genhtml_branch_coverage=1 00:08:19.085 --rc genhtml_function_coverage=1 00:08:19.085 --rc genhtml_legend=1 00:08:19.085 --rc geninfo_all_blocks=1 00:08:19.085 --rc 
geninfo_unexecuted_blocks=1 00:08:19.085 00:08:19.085 ' 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.085 --rc genhtml_branch_coverage=1 00:08:19.085 --rc genhtml_function_coverage=1 00:08:19.085 --rc genhtml_legend=1 00:08:19.085 --rc geninfo_all_blocks=1 00:08:19.085 --rc geninfo_unexecuted_blocks=1 00:08:19.085 00:08:19.085 ' 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.085 --rc genhtml_branch_coverage=1 00:08:19.085 --rc genhtml_function_coverage=1 00:08:19.085 --rc genhtml_legend=1 00:08:19.085 --rc geninfo_all_blocks=1 00:08:19.085 --rc geninfo_unexecuted_blocks=1 00:08:19.085 00:08:19.085 ' 00:08:19.085 19:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:19.085 19:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57673 00:08:19.085 19:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57673 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57673 ']' 00:08:19.085 19:28:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.085 19:28:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.344 [2024-12-05 19:28:12.611759] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
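Both targets in this run print `Waiting for process to start up and listen on UNIX domain socket ...` before the first RPC; `waitforlisten` in autotest_common.sh drives that. A minimal sketch of the underlying idea — poll until the RPC socket path appears — with the caveat that the real helper goes further and probes the server over the socket rather than trusting mere existence:

```shell
#!/usr/bin/env bash
# wait_for_socket: poll until $1 exists as a UNIX-domain socket, checking
# every 0.1 s, up to $2 attempts (default 100, ~10 s). Returns 1 on timeout.
wait_for_socket() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        if [ -S "$sock" ]; then       # -S: path exists and is a socket
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The extra RPC probe matters because spdk_tgt can create the socket before it is ready to serve requests; existence alone is a necessary but not sufficient readiness signal.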
00:08:19.344 [2024-12-05 19:28:12.612209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57673 ] 00:08:19.602 [2024-12-05 19:28:12.800171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.602 [2024-12-05 19:28:12.957254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.537 19:28:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.537 19:28:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:20.537 19:28:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:20.796 19:28:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57673 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57673 ']' 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57673 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57673 00:08:20.796 killing process with pid 57673 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57673' 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 57673 00:08:20.796 19:28:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 57673 00:08:23.333 ************************************ 00:08:23.333 END TEST alias_rpc 00:08:23.333 ************************************ 00:08:23.333 00:08:23.333 real 
0m3.986s 00:08:23.333 user 0m4.119s 00:08:23.333 sys 0m0.635s 00:08:23.333 19:28:16 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.333 19:28:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.333 19:28:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:23.333 19:28:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:23.333 19:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.333 19:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.333 19:28:16 -- common/autotest_common.sh@10 -- # set +x 00:08:23.333 ************************************ 00:08:23.333 START TEST spdkcli_tcp 00:08:23.333 ************************************ 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:23.333 * Looking for test storage... 00:08:23.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.333 
19:28:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.333 19:28:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.333 --rc genhtml_branch_coverage=1 00:08:23.333 --rc genhtml_function_coverage=1 00:08:23.333 --rc genhtml_legend=1 
00:08:23.333 --rc geninfo_all_blocks=1 00:08:23.333 --rc geninfo_unexecuted_blocks=1 00:08:23.333 00:08:23.333 ' 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.333 --rc genhtml_branch_coverage=1 00:08:23.333 --rc genhtml_function_coverage=1 00:08:23.333 --rc genhtml_legend=1 00:08:23.333 --rc geninfo_all_blocks=1 00:08:23.333 --rc geninfo_unexecuted_blocks=1 00:08:23.333 00:08:23.333 ' 00:08:23.333 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.333 --rc genhtml_branch_coverage=1 00:08:23.333 --rc genhtml_function_coverage=1 00:08:23.333 --rc genhtml_legend=1 00:08:23.333 --rc geninfo_all_blocks=1 00:08:23.334 --rc geninfo_unexecuted_blocks=1 00:08:23.334 00:08:23.334 ' 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.334 --rc genhtml_branch_coverage=1 00:08:23.334 --rc genhtml_function_coverage=1 00:08:23.334 --rc genhtml_legend=1 00:08:23.334 --rc geninfo_all_blocks=1 00:08:23.334 --rc geninfo_unexecuted_blocks=1 00:08:23.334 00:08:23.334 ' 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:23.334 19:28:16 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57780 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57780 00:08:23.334 19:28:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57780 ']' 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.334 19:28:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.334 [2024-12-05 19:28:16.653288] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:08:23.334 [2024-12-05 19:28:16.653805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:08:23.593 [2024-12-05 19:28:16.837877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.593 [2024-12-05 19:28:16.982810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.593 [2024-12-05 19:28:16.982831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.528 19:28:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.528 19:28:17 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:24.528 19:28:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57797 00:08:24.528 19:28:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:24.528 19:28:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:24.787 [ 00:08:24.787 "bdev_malloc_delete", 00:08:24.787 "bdev_malloc_create", 00:08:24.787 "bdev_null_resize", 00:08:24.787 "bdev_null_delete", 00:08:24.787 "bdev_null_create", 00:08:24.787 "bdev_nvme_cuse_unregister", 00:08:24.787 "bdev_nvme_cuse_register", 00:08:24.787 "bdev_opal_new_user", 00:08:24.787 "bdev_opal_set_lock_state", 00:08:24.787 "bdev_opal_delete", 00:08:24.787 "bdev_opal_get_info", 00:08:24.787 "bdev_opal_create", 00:08:24.787 "bdev_nvme_opal_revert", 00:08:24.787 "bdev_nvme_opal_init", 00:08:24.787 "bdev_nvme_send_cmd", 00:08:24.787 "bdev_nvme_set_keys", 00:08:24.787 "bdev_nvme_get_path_iostat", 00:08:24.787 "bdev_nvme_get_mdns_discovery_info", 00:08:24.787 "bdev_nvme_stop_mdns_discovery", 00:08:24.787 "bdev_nvme_start_mdns_discovery", 00:08:24.787 "bdev_nvme_set_multipath_policy", 00:08:24.787 
"bdev_nvme_set_preferred_path", 00:08:24.787 "bdev_nvme_get_io_paths", 00:08:24.787 "bdev_nvme_remove_error_injection", 00:08:24.787 "bdev_nvme_add_error_injection", 00:08:24.787 "bdev_nvme_get_discovery_info", 00:08:24.787 "bdev_nvme_stop_discovery", 00:08:24.787 "bdev_nvme_start_discovery", 00:08:24.787 "bdev_nvme_get_controller_health_info", 00:08:24.787 "bdev_nvme_disable_controller", 00:08:24.787 "bdev_nvme_enable_controller", 00:08:24.787 "bdev_nvme_reset_controller", 00:08:24.787 "bdev_nvme_get_transport_statistics", 00:08:24.787 "bdev_nvme_apply_firmware", 00:08:24.787 "bdev_nvme_detach_controller", 00:08:24.787 "bdev_nvme_get_controllers", 00:08:24.787 "bdev_nvme_attach_controller", 00:08:24.787 "bdev_nvme_set_hotplug", 00:08:24.787 "bdev_nvme_set_options", 00:08:24.787 "bdev_passthru_delete", 00:08:24.787 "bdev_passthru_create", 00:08:24.787 "bdev_lvol_set_parent_bdev", 00:08:24.787 "bdev_lvol_set_parent", 00:08:24.787 "bdev_lvol_check_shallow_copy", 00:08:24.787 "bdev_lvol_start_shallow_copy", 00:08:24.787 "bdev_lvol_grow_lvstore", 00:08:24.787 "bdev_lvol_get_lvols", 00:08:24.787 "bdev_lvol_get_lvstores", 00:08:24.787 "bdev_lvol_delete", 00:08:24.787 "bdev_lvol_set_read_only", 00:08:24.787 "bdev_lvol_resize", 00:08:24.787 "bdev_lvol_decouple_parent", 00:08:24.787 "bdev_lvol_inflate", 00:08:24.787 "bdev_lvol_rename", 00:08:24.787 "bdev_lvol_clone_bdev", 00:08:24.787 "bdev_lvol_clone", 00:08:24.787 "bdev_lvol_snapshot", 00:08:24.787 "bdev_lvol_create", 00:08:24.787 "bdev_lvol_delete_lvstore", 00:08:24.787 "bdev_lvol_rename_lvstore", 00:08:24.787 "bdev_lvol_create_lvstore", 00:08:24.787 "bdev_raid_set_options", 00:08:24.787 "bdev_raid_remove_base_bdev", 00:08:24.787 "bdev_raid_add_base_bdev", 00:08:24.787 "bdev_raid_delete", 00:08:24.787 "bdev_raid_create", 00:08:24.787 "bdev_raid_get_bdevs", 00:08:24.787 "bdev_error_inject_error", 00:08:24.787 "bdev_error_delete", 00:08:24.787 "bdev_error_create", 00:08:24.787 "bdev_split_delete", 00:08:24.787 
"bdev_split_create", 00:08:24.787 "bdev_delay_delete", 00:08:24.787 "bdev_delay_create", 00:08:24.787 "bdev_delay_update_latency", 00:08:24.787 "bdev_zone_block_delete", 00:08:24.787 "bdev_zone_block_create", 00:08:24.787 "blobfs_create", 00:08:24.787 "blobfs_detect", 00:08:24.787 "blobfs_set_cache_size", 00:08:24.787 "bdev_aio_delete", 00:08:24.787 "bdev_aio_rescan", 00:08:24.787 "bdev_aio_create", 00:08:24.787 "bdev_ftl_set_property", 00:08:24.787 "bdev_ftl_get_properties", 00:08:24.787 "bdev_ftl_get_stats", 00:08:24.787 "bdev_ftl_unmap", 00:08:24.787 "bdev_ftl_unload", 00:08:24.787 "bdev_ftl_delete", 00:08:24.787 "bdev_ftl_load", 00:08:24.787 "bdev_ftl_create", 00:08:24.787 "bdev_virtio_attach_controller", 00:08:24.787 "bdev_virtio_scsi_get_devices", 00:08:24.787 "bdev_virtio_detach_controller", 00:08:24.787 "bdev_virtio_blk_set_hotplug", 00:08:24.787 "bdev_iscsi_delete", 00:08:24.787 "bdev_iscsi_create", 00:08:24.787 "bdev_iscsi_set_options", 00:08:24.787 "accel_error_inject_error", 00:08:24.787 "ioat_scan_accel_module", 00:08:24.787 "dsa_scan_accel_module", 00:08:24.787 "iaa_scan_accel_module", 00:08:24.787 "keyring_file_remove_key", 00:08:24.787 "keyring_file_add_key", 00:08:24.787 "keyring_linux_set_options", 00:08:24.787 "fsdev_aio_delete", 00:08:24.787 "fsdev_aio_create", 00:08:24.787 "iscsi_get_histogram", 00:08:24.787 "iscsi_enable_histogram", 00:08:24.787 "iscsi_set_options", 00:08:24.787 "iscsi_get_auth_groups", 00:08:24.787 "iscsi_auth_group_remove_secret", 00:08:24.787 "iscsi_auth_group_add_secret", 00:08:24.787 "iscsi_delete_auth_group", 00:08:24.787 "iscsi_create_auth_group", 00:08:24.787 "iscsi_set_discovery_auth", 00:08:24.787 "iscsi_get_options", 00:08:24.787 "iscsi_target_node_request_logout", 00:08:24.787 "iscsi_target_node_set_redirect", 00:08:24.787 "iscsi_target_node_set_auth", 00:08:24.787 "iscsi_target_node_add_lun", 00:08:24.787 "iscsi_get_stats", 00:08:24.787 "iscsi_get_connections", 00:08:24.787 "iscsi_portal_group_set_auth", 
00:08:24.787 "iscsi_start_portal_group", 00:08:24.787 "iscsi_delete_portal_group", 00:08:24.787 "iscsi_create_portal_group", 00:08:24.787 "iscsi_get_portal_groups", 00:08:24.787 "iscsi_delete_target_node", 00:08:24.787 "iscsi_target_node_remove_pg_ig_maps", 00:08:24.787 "iscsi_target_node_add_pg_ig_maps", 00:08:24.787 "iscsi_create_target_node", 00:08:24.787 "iscsi_get_target_nodes", 00:08:24.787 "iscsi_delete_initiator_group", 00:08:24.787 "iscsi_initiator_group_remove_initiators", 00:08:24.787 "iscsi_initiator_group_add_initiators", 00:08:24.787 "iscsi_create_initiator_group", 00:08:24.787 "iscsi_get_initiator_groups", 00:08:24.787 "nvmf_set_crdt", 00:08:24.787 "nvmf_set_config", 00:08:24.787 "nvmf_set_max_subsystems", 00:08:24.787 "nvmf_stop_mdns_prr", 00:08:24.787 "nvmf_publish_mdns_prr", 00:08:24.787 "nvmf_subsystem_get_listeners", 00:08:24.787 "nvmf_subsystem_get_qpairs", 00:08:24.787 "nvmf_subsystem_get_controllers", 00:08:24.787 "nvmf_get_stats", 00:08:24.787 "nvmf_get_transports", 00:08:24.787 "nvmf_create_transport", 00:08:24.787 "nvmf_get_targets", 00:08:24.787 "nvmf_delete_target", 00:08:24.787 "nvmf_create_target", 00:08:24.787 "nvmf_subsystem_allow_any_host", 00:08:24.787 "nvmf_subsystem_set_keys", 00:08:24.787 "nvmf_subsystem_remove_host", 00:08:24.787 "nvmf_subsystem_add_host", 00:08:24.787 "nvmf_ns_remove_host", 00:08:24.787 "nvmf_ns_add_host", 00:08:24.787 "nvmf_subsystem_remove_ns", 00:08:24.787 "nvmf_subsystem_set_ns_ana_group", 00:08:24.787 "nvmf_subsystem_add_ns", 00:08:24.787 "nvmf_subsystem_listener_set_ana_state", 00:08:24.787 "nvmf_discovery_get_referrals", 00:08:24.787 "nvmf_discovery_remove_referral", 00:08:24.787 "nvmf_discovery_add_referral", 00:08:24.787 "nvmf_subsystem_remove_listener", 00:08:24.787 "nvmf_subsystem_add_listener", 00:08:24.787 "nvmf_delete_subsystem", 00:08:24.787 "nvmf_create_subsystem", 00:08:24.787 "nvmf_get_subsystems", 00:08:24.787 "env_dpdk_get_mem_stats", 00:08:24.787 "nbd_get_disks", 00:08:24.787 
"nbd_stop_disk", 00:08:24.787 "nbd_start_disk", 00:08:24.787 "ublk_recover_disk", 00:08:24.787 "ublk_get_disks", 00:08:24.787 "ublk_stop_disk", 00:08:24.787 "ublk_start_disk", 00:08:24.787 "ublk_destroy_target", 00:08:24.787 "ublk_create_target", 00:08:24.787 "virtio_blk_create_transport", 00:08:24.787 "virtio_blk_get_transports", 00:08:24.787 "vhost_controller_set_coalescing", 00:08:24.787 "vhost_get_controllers", 00:08:24.787 "vhost_delete_controller", 00:08:24.788 "vhost_create_blk_controller", 00:08:24.788 "vhost_scsi_controller_remove_target", 00:08:24.788 "vhost_scsi_controller_add_target", 00:08:24.788 "vhost_start_scsi_controller", 00:08:24.788 "vhost_create_scsi_controller", 00:08:24.788 "thread_set_cpumask", 00:08:24.788 "scheduler_set_options", 00:08:24.788 "framework_get_governor", 00:08:24.788 "framework_get_scheduler", 00:08:24.788 "framework_set_scheduler", 00:08:24.788 "framework_get_reactors", 00:08:24.788 "thread_get_io_channels", 00:08:24.788 "thread_get_pollers", 00:08:24.788 "thread_get_stats", 00:08:24.788 "framework_monitor_context_switch", 00:08:24.788 "spdk_kill_instance", 00:08:24.788 "log_enable_timestamps", 00:08:24.788 "log_get_flags", 00:08:24.788 "log_clear_flag", 00:08:24.788 "log_set_flag", 00:08:24.788 "log_get_level", 00:08:24.788 "log_set_level", 00:08:24.788 "log_get_print_level", 00:08:24.788 "log_set_print_level", 00:08:24.788 "framework_enable_cpumask_locks", 00:08:24.788 "framework_disable_cpumask_locks", 00:08:24.788 "framework_wait_init", 00:08:24.788 "framework_start_init", 00:08:24.788 "scsi_get_devices", 00:08:24.788 "bdev_get_histogram", 00:08:24.788 "bdev_enable_histogram", 00:08:24.788 "bdev_set_qos_limit", 00:08:24.788 "bdev_set_qd_sampling_period", 00:08:24.788 "bdev_get_bdevs", 00:08:24.788 "bdev_reset_iostat", 00:08:24.788 "bdev_get_iostat", 00:08:24.788 "bdev_examine", 00:08:24.788 "bdev_wait_for_examine", 00:08:24.788 "bdev_set_options", 00:08:24.788 "accel_get_stats", 00:08:24.788 "accel_set_options", 
00:08:24.788 "accel_set_driver", 00:08:24.788 "accel_crypto_key_destroy", 00:08:24.788 "accel_crypto_keys_get", 00:08:24.788 "accel_crypto_key_create", 00:08:24.788 "accel_assign_opc", 00:08:24.788 "accel_get_module_info", 00:08:24.788 "accel_get_opc_assignments", 00:08:24.788 "vmd_rescan", 00:08:24.788 "vmd_remove_device", 00:08:24.788 "vmd_enable", 00:08:24.788 "sock_get_default_impl", 00:08:24.788 "sock_set_default_impl", 00:08:24.788 "sock_impl_set_options", 00:08:24.788 "sock_impl_get_options", 00:08:24.788 "iobuf_get_stats", 00:08:24.788 "iobuf_set_options", 00:08:24.788 "keyring_get_keys", 00:08:24.788 "framework_get_pci_devices", 00:08:24.788 "framework_get_config", 00:08:24.788 "framework_get_subsystems", 00:08:24.788 "fsdev_set_opts", 00:08:24.788 "fsdev_get_opts", 00:08:24.788 "trace_get_info", 00:08:24.788 "trace_get_tpoint_group_mask", 00:08:24.788 "trace_disable_tpoint_group", 00:08:24.788 "trace_enable_tpoint_group", 00:08:24.788 "trace_clear_tpoint_mask", 00:08:24.788 "trace_set_tpoint_mask", 00:08:24.788 "notify_get_notifications", 00:08:24.788 "notify_get_types", 00:08:24.788 "spdk_get_version", 00:08:24.788 "rpc_get_methods" 00:08:24.788 ] 00:08:24.788 19:28:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.788 19:28:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:24.788 19:28:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57780 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57780 ']' 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57780 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.788 19:28:18 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57780 00:08:24.788 killing process with pid 57780 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57780' 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57780 00:08:24.788 19:28:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57780 00:08:27.344 ************************************ 00:08:27.344 END TEST spdkcli_tcp 00:08:27.344 ************************************ 00:08:27.344 00:08:27.344 real 0m4.060s 00:08:27.344 user 0m7.260s 00:08:27.344 sys 0m0.665s 00:08:27.344 19:28:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.344 19:28:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.344 19:28:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:27.344 19:28:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.344 19:28:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.344 19:28:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.344 ************************************ 00:08:27.344 START TEST dpdk_mem_utility 00:08:27.344 ************************************ 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:27.344 * Looking for test storage... 
00:08:27.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.344 19:28:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.344 --rc genhtml_branch_coverage=1 00:08:27.344 --rc genhtml_function_coverage=1 00:08:27.344 --rc genhtml_legend=1 00:08:27.344 --rc geninfo_all_blocks=1 00:08:27.344 --rc geninfo_unexecuted_blocks=1 00:08:27.344 00:08:27.344 ' 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.344 --rc genhtml_branch_coverage=1 00:08:27.344 --rc genhtml_function_coverage=1 00:08:27.344 --rc genhtml_legend=1 00:08:27.344 --rc geninfo_all_blocks=1 00:08:27.344 --rc 
geninfo_unexecuted_blocks=1 00:08:27.344 00:08:27.344 ' 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.344 --rc genhtml_branch_coverage=1 00:08:27.344 --rc genhtml_function_coverage=1 00:08:27.344 --rc genhtml_legend=1 00:08:27.344 --rc geninfo_all_blocks=1 00:08:27.344 --rc geninfo_unexecuted_blocks=1 00:08:27.344 00:08:27.344 ' 00:08:27.344 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.344 --rc genhtml_branch_coverage=1 00:08:27.344 --rc genhtml_function_coverage=1 00:08:27.344 --rc genhtml_legend=1 00:08:27.344 --rc geninfo_all_blocks=1 00:08:27.344 --rc geninfo_unexecuted_blocks=1 00:08:27.344 00:08:27.344 ' 00:08:27.345 19:28:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:27.345 19:28:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57902 00:08:27.345 19:28:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.345 19:28:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57902 00:08:27.345 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57902 ']' 00:08:27.345 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.345 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.345 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:27.345 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.345 19:28:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:27.345 [2024-12-05 19:28:20.719092] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:27.345 [2024-12-05 19:28:20.719538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57902 ] 00:08:27.604 [2024-12-05 19:28:20.891640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.604 [2024-12-05 19:28:21.009213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.540 19:28:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.540 19:28:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:28.540 19:28:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:28.540 19:28:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:28.540 19:28:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.540 19:28:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:28.540 { 00:08:28.540 "filename": "/tmp/spdk_mem_dump.txt" 00:08:28.540 } 00:08:28.540 19:28:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.540 19:28:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:28.540 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:28.540 1 heaps totaling size 824.000000 MiB 00:08:28.540 size: 824.000000 MiB heap id: 0 00:08:28.540 end heaps---------- 00:08:28.540 9 mempools totaling size 603.782043 MiB 00:08:28.540 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:28.540 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:28.540 size: 100.555481 MiB name: bdev_io_57902 00:08:28.540 size: 50.003479 MiB name: msgpool_57902 00:08:28.540 size: 36.509338 MiB name: fsdev_io_57902 00:08:28.540 size: 21.763794 MiB name: PDU_Pool 00:08:28.540 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:28.540 size: 4.133484 MiB name: evtpool_57902 00:08:28.540 size: 0.026123 MiB name: Session_Pool 00:08:28.540 end mempools------- 00:08:28.540 6 memzones totaling size 4.142822 MiB 00:08:28.540 size: 1.000366 MiB name: RG_ring_0_57902 00:08:28.540 size: 1.000366 MiB name: RG_ring_1_57902 00:08:28.540 size: 1.000366 MiB name: RG_ring_4_57902 00:08:28.540 size: 1.000366 MiB name: RG_ring_5_57902 00:08:28.540 size: 0.125366 MiB name: RG_ring_2_57902 00:08:28.540 size: 0.015991 MiB name: RG_ring_3_57902 00:08:28.540 end memzones------- 00:08:28.540 19:28:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:28.801 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:08:28.801 list of free elements. 
size: 16.781372 MiB 00:08:28.801 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:28.801 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:28.801 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:28.801 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:28.801 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:28.801 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:28.801 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:28.801 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:28.801 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:28.801 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:28.801 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:28.801 element at address: 0x20001b400000 with size: 0.562683 MiB 00:08:28.801 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:28.801 element at address: 0x200019600000 with size: 0.488220 MiB 00:08:28.801 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:28.801 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:28.801 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:28.801 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:28.801 list of standard malloc elements. 
size: 199.287720 MiB 00:08:28.801 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:28.801 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:28.801 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:28.801 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:28.801 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:28.801 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:28.801 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:28.801 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:28.801 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:28.801 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:28.801 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:28.801 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:28.801 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:08:28.801 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:28.801 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:28.802 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:28.802 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:08:28.802 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:28.802 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:28.803 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:28.803 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:28.803 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:08:28.803 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:28.803 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:28.803 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:28.803 list of memzone associated elements. size: 607.930908 MiB 00:08:28.803 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:28.803 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:28.803 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:28.803 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:28.803 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:28.803 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57902_0 00:08:28.803 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:28.803 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57902_0 00:08:28.803 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:28.803 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57902_0 00:08:28.803 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:28.803 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:28.803 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:28.803 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:28.803 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:08:28.803 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57902_0 00:08:28.803 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:28.803 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57902 00:08:28.803 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:28.803 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57902 00:08:28.803 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:28.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:28.803 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:28.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:28.803 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:28.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:28.803 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:28.803 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:28.803 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:28.803 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57902 00:08:28.803 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:28.803 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57902 00:08:28.803 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:28.803 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57902 00:08:28.803 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:28.803 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57902 00:08:28.803 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:28.803 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57902 00:08:28.803 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:28.803 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57902 00:08:28.803 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:08:28.803 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:28.803 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:28.803 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:28.804 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:28.804 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:28.804 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:28.804 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57902 00:08:28.804 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:28.804 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57902 00:08:28.804 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:28.804 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:28.804 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:28.804 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:28.804 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:28.804 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57902 00:08:28.804 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:28.804 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:28.804 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:28.804 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57902 00:08:28.804 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:28.804 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57902 00:08:28.804 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:28.804 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57902 00:08:28.804 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:28.804 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:28.804 19:28:22 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:28.804 19:28:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57902 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57902 ']' 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57902 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57902 00:08:28.804 killing process with pid 57902 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57902' 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57902 00:08:28.804 19:28:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57902 00:08:31.336 00:08:31.336 real 0m3.754s 00:08:31.336 user 0m3.802s 00:08:31.336 sys 0m0.600s 00:08:31.336 19:28:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.336 19:28:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:31.336 ************************************ 00:08:31.336 END TEST dpdk_mem_utility 00:08:31.336 ************************************ 00:08:31.336 19:28:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:31.336 19:28:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.336 19:28:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.336 19:28:24 -- common/autotest_common.sh@10 -- # set +x 00:08:31.336 ************************************ 
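The trace above shows the test teardown: `killprocess 57902` first probes the PID with `kill -0` (which sends no signal, only checks existence and permission), then resolves the command name via `ps -o comm=` so privileged helpers such as `sudo` are never terminated, and only then kills and waits. A minimal sketch of that pattern, with illustrative names rather than the actual autotest helpers:

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern traced above; the function
# body is an assumption reconstructed from the xtrace lines, not the
# real common/autotest_common.sh implementation.
killprocess() {
    local pid=$1
    # kill -0 delivers no signal: it only tests that the PID exists
    # and that we are permitted to signal it.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid not running"
        return 1
    fi
    # Look up the command name so a privileged wrapper (e.g. sudo)
    # is not killed by mistake, mirroring the '[ reactor_0 = sudo ]'
    # guard in the trace.
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child if it is ours; ignore its (signal) status.
    wait "$pid" 2>/dev/null || true
}
```

Usage in a test script would look like `sleep 60 & killprocess $!`, after which the PID no longer responds to `kill -0`.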
00:08:31.336 START TEST event 00:08:31.336 ************************************ 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:31.336 * Looking for test storage... 00:08:31.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.336 19:28:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.336 19:28:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.336 19:28:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.336 19:28:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.336 19:28:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.336 19:28:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.336 19:28:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.336 19:28:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.336 19:28:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.336 19:28:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.336 19:28:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.336 19:28:24 event -- scripts/common.sh@344 -- # case "$op" in 00:08:31.336 19:28:24 event -- scripts/common.sh@345 -- # : 1 00:08:31.336 19:28:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.336 19:28:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.336 19:28:24 event -- scripts/common.sh@365 -- # decimal 1 00:08:31.336 19:28:24 event -- scripts/common.sh@353 -- # local d=1 00:08:31.336 19:28:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.336 19:28:24 event -- scripts/common.sh@355 -- # echo 1 00:08:31.336 19:28:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.336 19:28:24 event -- scripts/common.sh@366 -- # decimal 2 00:08:31.336 19:28:24 event -- scripts/common.sh@353 -- # local d=2 00:08:31.336 19:28:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.336 19:28:24 event -- scripts/common.sh@355 -- # echo 2 00:08:31.336 19:28:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.336 19:28:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.336 19:28:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.336 19:28:24 event -- scripts/common.sh@368 -- # return 0 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.336 19:28:24 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.336 --rc genhtml_branch_coverage=1 00:08:31.336 --rc genhtml_function_coverage=1 00:08:31.336 --rc genhtml_legend=1 00:08:31.336 --rc geninfo_all_blocks=1 00:08:31.337 --rc geninfo_unexecuted_blocks=1 00:08:31.337 00:08:31.337 ' 00:08:31.337 19:28:24 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.337 --rc genhtml_branch_coverage=1 00:08:31.337 --rc genhtml_function_coverage=1 00:08:31.337 --rc genhtml_legend=1 00:08:31.337 --rc geninfo_all_blocks=1 00:08:31.337 --rc geninfo_unexecuted_blocks=1 00:08:31.337 00:08:31.337 ' 00:08:31.337 19:28:24 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.337 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:31.337 --rc genhtml_branch_coverage=1 00:08:31.337 --rc genhtml_function_coverage=1 00:08:31.337 --rc genhtml_legend=1 00:08:31.337 --rc geninfo_all_blocks=1 00:08:31.337 --rc geninfo_unexecuted_blocks=1 00:08:31.337 00:08:31.337 ' 00:08:31.337 19:28:24 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.337 --rc genhtml_branch_coverage=1 00:08:31.337 --rc genhtml_function_coverage=1 00:08:31.337 --rc genhtml_legend=1 00:08:31.337 --rc geninfo_all_blocks=1 00:08:31.337 --rc geninfo_unexecuted_blocks=1 00:08:31.337 00:08:31.337 ' 00:08:31.337 19:28:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:31.337 19:28:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:31.337 19:28:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:31.337 19:28:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:31.337 19:28:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.337 19:28:24 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.337 ************************************ 00:08:31.337 START TEST event_perf 00:08:31.337 ************************************ 00:08:31.337 19:28:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:31.337 Running I/O for 1 seconds...[2024-12-05 19:28:24.495831] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
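The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, or `:` (`IFS=.-:` with `read -ra`) and compares components numerically, left to right, padding the shorter version with zeros. A self-contained sketch of that scheme, assuming the same splitting and the component loop visible in the trace (exact edge-case handling in scripts/common.sh may differ):

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the cmp_versions routine from the xtrace
# above: purely numeric, component-wise dotted-version comparison.
lt() {  # usage: lt VER1 VER2 -> succeeds if VER1 < VER2
    cmp_versions "$1" '<' "$2"
}
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # A missing component compares as 0 (1.15 vs 2 -> 1.15 vs 2.0).
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    # All components equal.
    [[ $op == '=' ]]
}
```

Note this is numeric, not lexicographic: under this scheme `2.39 < 2.4` is false, because the second components compare as 39 > 4.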
00:08:31.337 [2024-12-05 19:28:24.496193] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58010 ] 00:08:31.337 [2024-12-05 19:28:24.681774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.595 [2024-12-05 19:28:24.813517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.595 [2024-12-05 19:28:24.813614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.595 Running I/O for 1 seconds...[2024-12-05 19:28:24.813764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.595 [2024-12-05 19:28:24.813783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.976 00:08:32.976 lcore 0: 196201 00:08:32.976 lcore 1: 196198 00:08:32.976 lcore 2: 196200 00:08:32.976 lcore 3: 196202 00:08:32.976 done. 
00:08:32.976 00:08:32.976 real 0m1.608s 00:08:32.976 user 0m4.370s 00:08:32.976 sys 0m0.112s 00:08:32.976 ************************************ 00:08:32.976 END TEST event_perf 00:08:32.976 ************************************ 00:08:32.976 19:28:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.976 19:28:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.976 19:28:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:32.976 19:28:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:32.976 19:28:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.976 19:28:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.976 ************************************ 00:08:32.976 START TEST event_reactor 00:08:32.976 ************************************ 00:08:32.976 19:28:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:32.976 [2024-12-05 19:28:26.150196] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:08:32.976 [2024-12-05 19:28:26.150503] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 00:08:32.976 [2024-12-05 19:28:26.322540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.234 [2024-12-05 19:28:26.455527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.610 test_start 00:08:34.610 oneshot 00:08:34.610 tick 100 00:08:34.610 tick 100 00:08:34.610 tick 250 00:08:34.610 tick 100 00:08:34.610 tick 100 00:08:34.610 tick 100 00:08:34.610 tick 250 00:08:34.610 tick 500 00:08:34.610 tick 100 00:08:34.610 tick 100 00:08:34.610 tick 250 00:08:34.610 tick 100 00:08:34.610 tick 100 00:08:34.610 test_end 00:08:34.610 00:08:34.610 real 0m1.573s 00:08:34.610 user 0m1.374s 00:08:34.610 sys 0m0.089s 00:08:34.610 19:28:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.610 ************************************ 00:08:34.610 END TEST event_reactor 00:08:34.610 ************************************ 00:08:34.610 19:28:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:34.610 19:28:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:34.610 19:28:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:34.610 19:28:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.610 19:28:27 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.610 ************************************ 00:08:34.610 START TEST event_reactor_perf 00:08:34.610 ************************************ 00:08:34.610 19:28:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:34.610 [2024-12-05 
19:28:27.780955] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:34.610 [2024-12-05 19:28:27.781143] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58086 ] 00:08:34.611 [2024-12-05 19:28:27.974826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.869 [2024-12-05 19:28:28.142827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.243 test_start 00:08:36.243 test_end 00:08:36.243 Performance: 293925 events per second 00:08:36.243 00:08:36.243 real 0m1.631s 00:08:36.243 user 0m1.403s 00:08:36.243 sys 0m0.116s 00:08:36.243 19:28:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.243 ************************************ 00:08:36.243 END TEST event_reactor_perf 00:08:36.243 ************************************ 00:08:36.243 19:28:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:36.243 19:28:29 event -- event/event.sh@49 -- # uname -s 00:08:36.243 19:28:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:36.243 19:28:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:36.243 19:28:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.243 19:28:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.243 19:28:29 event -- common/autotest_common.sh@10 -- # set +x 00:08:36.243 ************************************ 00:08:36.243 START TEST event_scheduler 00:08:36.243 ************************************ 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:36.243 * Looking for test storage... 
00:08:36.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.243 19:28:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.243 19:28:29 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.243 --rc genhtml_branch_coverage=1 00:08:36.243 --rc genhtml_function_coverage=1 00:08:36.243 --rc genhtml_legend=1 00:08:36.243 --rc geninfo_all_blocks=1 00:08:36.243 --rc geninfo_unexecuted_blocks=1 00:08:36.243 00:08:36.243 ' 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.244 --rc genhtml_branch_coverage=1 00:08:36.244 --rc genhtml_function_coverage=1 00:08:36.244 --rc 
genhtml_legend=1 00:08:36.244 --rc geninfo_all_blocks=1 00:08:36.244 --rc geninfo_unexecuted_blocks=1 00:08:36.244 00:08:36.244 ' 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.244 --rc genhtml_branch_coverage=1 00:08:36.244 --rc genhtml_function_coverage=1 00:08:36.244 --rc genhtml_legend=1 00:08:36.244 --rc geninfo_all_blocks=1 00:08:36.244 --rc geninfo_unexecuted_blocks=1 00:08:36.244 00:08:36.244 ' 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.244 --rc genhtml_branch_coverage=1 00:08:36.244 --rc genhtml_function_coverage=1 00:08:36.244 --rc genhtml_legend=1 00:08:36.244 --rc geninfo_all_blocks=1 00:08:36.244 --rc geninfo_unexecuted_blocks=1 00:08:36.244 00:08:36.244 ' 00:08:36.244 19:28:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:36.244 19:28:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:36.244 19:28:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58161 00:08:36.244 19:28:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.244 19:28:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58161 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58161 ']' 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:36.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.244 19:28:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.502 [2024-12-05 19:28:29.729351] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:08:36.502 [2024-12-05 19:28:29.729850] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58161 ] 00:08:36.502 [2024-12-05 19:28:29.920885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.760 [2024-12-05 19:28:30.064383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.760 [2024-12-05 19:28:30.064479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.760 [2024-12-05 19:28:30.064610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.760 [2024-12-05 19:28:30.064631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.694 19:28:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.694 19:28:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:37.695 19:28:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:37.695 19:28:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:28:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.695 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.695 POWER: Cannot set governor of lcore 0 to userspace 00:08:37.695 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.695 POWER: Cannot set governor of lcore 0 to performance 00:08:37.695 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.695 POWER: Cannot set governor of lcore 0 to userspace 00:08:37.695 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.695 POWER: Cannot set governor of lcore 0 to userspace 00:08:37.695 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:37.695 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:37.695 POWER: Unable to set Power Management Environment for lcore 0 00:08:37.695 [2024-12-05 19:28:30.803986] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:37.695 [2024-12-05 19:28:30.804017] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:37.695 [2024-12-05 19:28:30.804032] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:37.695 [2024-12-05 19:28:30.804059] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:37.695 [2024-12-05 19:28:30.804072] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:37.695 [2024-12-05 19:28:30.804086] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:37.695 19:28:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.695 19:28:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:37.695 19:28:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.695 19:28:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 [2024-12-05 19:28:31.141626] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:37.952 19:28:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:37.952 19:28:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.952 19:28:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 ************************************ 00:08:37.952 START TEST scheduler_create_thread 00:08:37.952 ************************************ 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 2 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 3 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 4 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 5 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 6 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.952 7 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 8 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 9 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 10 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.952 19:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.326 19:28:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.326 19:28:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:39.326 19:28:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:39.326 19:28:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.326 19:28:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:40.703 ************************************ 00:08:40.703 END TEST scheduler_create_thread 00:08:40.703 ************************************ 00:08:40.703 19:28:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.703 00:08:40.703 real 0m2.620s 00:08:40.703 user 0m0.016s 00:08:40.703 sys 0m0.009s 00:08:40.703 19:28:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.703 19:28:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:40.703 19:28:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:40.703 19:28:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58161 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58161 ']' 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58161 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58161 00:08:40.703 killing process with pid 58161 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58161' 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58161 00:08:40.703 19:28:33 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58161 00:08:40.963 [2024-12-05 19:28:34.259591] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:41.901 00:08:41.901 real 0m5.849s 00:08:41.901 user 0m10.454s 00:08:41.901 sys 0m0.563s 00:08:41.901 19:28:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.901 ************************************ 00:08:41.901 END TEST event_scheduler 00:08:41.901 ************************************ 00:08:41.901 19:28:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:41.901 19:28:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:41.901 19:28:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:41.901 19:28:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.901 19:28:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.901 19:28:35 event -- common/autotest_common.sh@10 -- # set +x 00:08:41.901 ************************************ 00:08:41.901 START TEST app_repeat 00:08:41.901 ************************************ 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:41.901 Process app_repeat pid: 58269 00:08:41.901 spdk_app_start Round 0 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58269 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 
1' SIGINT SIGTERM EXIT 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58269' 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:41.901 19:28:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58269 /var/tmp/spdk-nbd.sock 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58269 ']' 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.901 19:28:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:42.158 [2024-12-05 19:28:35.400134] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:08:42.159 [2024-12-05 19:28:35.400323] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ] 00:08:42.159 [2024-12-05 19:28:35.586270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:42.418 [2024-12-05 19:28:35.720560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.418 [2024-12-05 19:28:35.720571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.356 19:28:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.356 19:28:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:43.356 19:28:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:43.356 Malloc0 00:08:43.356 19:28:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:43.924 Malloc1 00:08:43.924 19:28:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:43.924 19:28:37 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:43.924 19:28:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:44.184 /dev/nbd0 00:08:44.184 19:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.184 19:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.184 1+0 records in 00:08:44.184 1+0 
records out 00:08:44.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035926 s, 11.4 MB/s 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.184 19:28:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:44.184 19:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.184 19:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.184 19:28:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:44.442 /dev/nbd1 00:08:44.442 19:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:44.443 19:28:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.443 1+0 records in 00:08:44.443 1+0 records out 00:08:44.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303713 s, 13.5 MB/s 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.443 19:28:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:44.443 19:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.443 19:28:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.443 19:28:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.443 19:28:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.443 19:28:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:44.753 { 00:08:44.753 "nbd_device": "/dev/nbd0", 00:08:44.753 "bdev_name": "Malloc0" 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "nbd_device": "/dev/nbd1", 00:08:44.753 "bdev_name": "Malloc1" 00:08:44.753 } 00:08:44.753 ]' 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:44.753 { 00:08:44.753 "nbd_device": "/dev/nbd0", 00:08:44.753 "bdev_name": "Malloc0" 00:08:44.753 }, 00:08:44.753 { 00:08:44.753 "nbd_device": "/dev/nbd1", 00:08:44.753 "bdev_name": "Malloc1" 00:08:44.753 } 00:08:44.753 ]' 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:44.753 /dev/nbd1' 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:44.753 /dev/nbd1' 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:44.753 256+0 records in 00:08:44.753 256+0 records out 00:08:44.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108175 s, 96.9 MB/s 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.753 19:28:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:45.012 256+0 records in 00:08:45.012 256+0 records out 00:08:45.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297262 s, 35.3 MB/s 00:08:45.012 19:28:38 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:45.012 256+0 records in 00:08:45.012 256+0 records out 00:08:45.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301592 s, 34.8 MB/s 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.012 19:28:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.271 19:28:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.530 19:28:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.789 19:28:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.789 19:28:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:45.789 19:28:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:46.048 19:28:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:46.048 19:28:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:46.308 19:28:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:47.686 [2024-12-05 19:28:40.849023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:47.686 [2024-12-05 19:28:40.985382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.686 [2024-12-05 19:28:40.985399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.945 
[2024-12-05 19:28:41.183689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:47.945 [2024-12-05 19:28:41.183831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:49.321 spdk_app_start Round 1 00:08:49.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:49.321 19:28:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:49.321 19:28:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:49.321 19:28:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58269 /var/tmp/spdk-nbd.sock 00:08:49.321 19:28:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58269 ']' 00:08:49.321 19:28:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.321 19:28:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.321 19:28:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:49.321 19:28:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.321 19:28:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:49.890 19:28:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.890 19:28:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:49.890 19:28:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.149 Malloc0 00:08:50.149 19:28:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.408 Malloc1 00:08:50.408 19:28:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.408 19:28:43 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.408 19:28:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:50.667 /dev/nbd0 00:08:50.668 19:28:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:50.668 19:28:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:50.668 1+0 records in 00:08:50.668 1+0 records out 00:08:50.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601088 s, 6.8 MB/s 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.668 19:28:44 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.668 19:28:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:50.668 19:28:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.668 19:28:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.668 19:28:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:50.927 /dev/nbd1 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.187 1+0 records in 00:08:51.187 1+0 records out 00:08:51.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414601 s, 9.9 MB/s 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.187 19:28:44 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.187 19:28:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.187 19:28:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.446 { 00:08:51.446 "nbd_device": "/dev/nbd0", 00:08:51.446 "bdev_name": "Malloc0" 00:08:51.446 }, 00:08:51.446 { 00:08:51.446 "nbd_device": "/dev/nbd1", 00:08:51.446 "bdev_name": "Malloc1" 00:08:51.446 } 00:08:51.446 ]' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.446 { 00:08:51.446 "nbd_device": "/dev/nbd0", 00:08:51.446 "bdev_name": "Malloc0" 00:08:51.446 }, 00:08:51.446 { 00:08:51.446 "nbd_device": "/dev/nbd1", 00:08:51.446 "bdev_name": "Malloc1" 00:08:51.446 } 00:08:51.446 ]' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.446 /dev/nbd1' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.446 /dev/nbd1' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:51.446 
19:28:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:51.446 256+0 records in 00:08:51.446 256+0 records out 00:08:51.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636607 s, 165 MB/s 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.446 256+0 records in 00:08:51.446 256+0 records out 00:08:51.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312879 s, 33.5 MB/s 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.446 19:28:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.446 256+0 records in 00:08:51.446 256+0 records out 00:08:51.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310293 s, 33.8 MB/s 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.447 19:28:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.013 19:28:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.013 19:28:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.271 19:28:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.530 19:28:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:52.530 19:28:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:52.530 19:28:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:53.096 19:28:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:54.500 [2024-12-05 19:28:47.520952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.500 [2024-12-05 19:28:47.664849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.500 [2024-12-05 19:28:47.664887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.500 [2024-12-05 19:28:47.864619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:54.500 [2024-12-05 19:28:47.864763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:56.401 19:28:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:56.401 spdk_app_start Round 2 00:08:56.401 19:28:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:56.401 19:28:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58269 /var/tmp/spdk-nbd.sock 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58269 ']' 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.401 19:28:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:56.401 19:28:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.659 Malloc0 00:08:56.917 19:28:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.175 Malloc1 00:08:57.175 19:28:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.175 
19:28:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.175 19:28:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.176 19:28:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.176 19:28:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.176 19:28:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:57.176 19:28:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.176 19:28:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.176 19:28:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.433 /dev/nbd0 00:08:57.433 19:28:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.433 19:28:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:57.434 19:28:50 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.434 1+0 records in 00:08:57.434 1+0 records out 00:08:57.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261761 s, 15.6 MB/s 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.434 19:28:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.434 19:28:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.434 19:28:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.434 19:28:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:57.691 /dev/nbd1 00:08:57.691 19:28:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:57.691 19:28:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:57.691 19:28:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:57.691 19:28:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.691 19:28:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.691 19:28:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.691 19:28:51 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:57.691 19:28:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.692 1+0 records in 00:08:57.692 1+0 records out 00:08:57.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377453 s, 10.9 MB/s 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.692 19:28:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.692 19:28:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.692 19:28:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.692 19:28:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.692 19:28:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.692 19:28:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.949 19:28:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:57.949 { 00:08:57.949 "nbd_device": "/dev/nbd0", 00:08:57.949 "bdev_name": "Malloc0" 00:08:57.949 }, 00:08:57.949 { 00:08:57.949 "nbd_device": "/dev/nbd1", 00:08:57.949 "bdev_name": 
"Malloc1" 00:08:57.949 } 00:08:57.949 ]' 00:08:57.949 19:28:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:57.949 { 00:08:57.949 "nbd_device": "/dev/nbd0", 00:08:57.949 "bdev_name": "Malloc0" 00:08:57.949 }, 00:08:57.949 { 00:08:57.949 "nbd_device": "/dev/nbd1", 00:08:57.949 "bdev_name": "Malloc1" 00:08:57.949 } 00:08:57.949 ]' 00:08:57.949 19:28:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.208 /dev/nbd1' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.208 /dev/nbd1' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.208 256+0 records in 00:08:58.208 256+0 records out 00:08:58.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105938 s, 99.0 MB/s 
00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.208 256+0 records in 00:08:58.208 256+0 records out 00:08:58.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316457 s, 33.1 MB/s 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.208 256+0 records in 00:08:58.208 256+0 records out 00:08:58.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.036369 s, 28.8 MB/s 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.208 19:28:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.466 19:28:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.724 19:28:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.983 19:28:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:58.983 19:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:58.983 19:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.241 19:28:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.241 19:28:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:59.500 19:28:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:00.876 [2024-12-05 19:28:53.985100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:00.876 [2024-12-05 19:28:54.111190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.876 [2024-12-05 19:28:54.111194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.876 [2024-12-05 19:28:54.304198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:00.876 [2024-12-05 19:28:54.304346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.811 19:28:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58269 /var/tmp/spdk-nbd.sock 00:09:02.811 19:28:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58269 ']' 00:09:02.811 19:28:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.811 19:28:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.812 19:28:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:02.812 19:28:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.812 19:28:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:02.812 19:28:56 event.app_repeat -- event/event.sh@39 -- # killprocess 58269 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58269 ']' 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58269 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58269 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.812 killing process with pid 58269 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58269' 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58269 00:09:02.812 19:28:56 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58269 00:09:04.190 spdk_app_start is called in Round 0. 00:09:04.190 Shutdown signal received, stop current app iteration 00:09:04.190 Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 reinitialization... 00:09:04.190 spdk_app_start is called in Round 1. 00:09:04.190 Shutdown signal received, stop current app iteration 00:09:04.190 Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 reinitialization... 00:09:04.190 spdk_app_start is called in Round 2. 
00:09:04.190 Shutdown signal received, stop current app iteration 00:09:04.190 Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 reinitialization... 00:09:04.190 spdk_app_start is called in Round 3. 00:09:04.190 Shutdown signal received, stop current app iteration 00:09:04.190 19:28:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:04.190 19:28:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:04.190 00:09:04.190 real 0m21.890s 00:09:04.190 user 0m48.540s 00:09:04.190 sys 0m3.055s 00:09:04.190 19:28:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.190 ************************************ 00:09:04.190 END TEST app_repeat 00:09:04.190 ************************************ 00:09:04.190 19:28:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:04.190 19:28:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:04.190 19:28:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:04.190 19:28:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.190 19:28:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.190 19:28:57 event -- common/autotest_common.sh@10 -- # set +x 00:09:04.190 ************************************ 00:09:04.190 START TEST cpu_locks 00:09:04.190 ************************************ 00:09:04.190 19:28:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:04.190 * Looking for test storage... 
00:09:04.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:04.190 19:28:57 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.190 19:28:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.190 19:28:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.190 19:28:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.190 19:28:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.191 19:28:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.191 --rc genhtml_branch_coverage=1 00:09:04.191 --rc genhtml_function_coverage=1 00:09:04.191 --rc genhtml_legend=1 00:09:04.191 --rc geninfo_all_blocks=1 00:09:04.191 --rc geninfo_unexecuted_blocks=1 00:09:04.191 00:09:04.191 ' 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.191 --rc genhtml_branch_coverage=1 00:09:04.191 --rc genhtml_function_coverage=1 00:09:04.191 --rc genhtml_legend=1 00:09:04.191 --rc geninfo_all_blocks=1 00:09:04.191 --rc geninfo_unexecuted_blocks=1 
00:09:04.191 00:09:04.191 ' 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.191 --rc genhtml_branch_coverage=1 00:09:04.191 --rc genhtml_function_coverage=1 00:09:04.191 --rc genhtml_legend=1 00:09:04.191 --rc geninfo_all_blocks=1 00:09:04.191 --rc geninfo_unexecuted_blocks=1 00:09:04.191 00:09:04.191 ' 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.191 --rc genhtml_branch_coverage=1 00:09:04.191 --rc genhtml_function_coverage=1 00:09:04.191 --rc genhtml_legend=1 00:09:04.191 --rc geninfo_all_blocks=1 00:09:04.191 --rc geninfo_unexecuted_blocks=1 00:09:04.191 00:09:04.191 ' 00:09:04.191 19:28:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:04.191 19:28:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:04.191 19:28:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:04.191 19:28:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.191 19:28:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.191 ************************************ 00:09:04.191 START TEST default_locks 00:09:04.191 ************************************ 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58744 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58744 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58744 ']' 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.191 19:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.191 [2024-12-05 19:28:57.603423] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:04.191 [2024-12-05 19:28:57.603594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58744 ] 00:09:04.450 [2024-12-05 19:28:57.778844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.709 [2024-12-05 19:28:57.907605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.646 19:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.646 19:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:05.646 19:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58744 00:09:05.646 19:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58744 00:09:05.646 19:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58744 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58744 ']' 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58744 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58744 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.905 killing process with pid 58744 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58744' 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58744 00:09:05.905 19:28:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58744 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58744 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58744 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58744 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58744 ']' 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58744) - No such process 00:09:08.459 ERROR: process (pid: 58744) is no longer running 00:09:08.459 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:08.460 00:09:08.460 real 0m4.037s 00:09:08.460 user 0m4.083s 00:09:08.460 sys 0m0.725s 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.460 ************************************ 00:09:08.460 END TEST default_locks 00:09:08.460 19:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.460 ************************************ 00:09:08.460 19:29:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:08.460 19:29:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:09:08.460 19:29:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.460 19:29:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.460 ************************************ 00:09:08.460 START TEST default_locks_via_rpc 00:09:08.460 ************************************ 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58821 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58821 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58821 ']' 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.460 19:29:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.460 [2024-12-05 19:29:01.710349] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:08.460 [2024-12-05 19:29:01.710557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58821 ]
00:09:08.460 [2024-12-05 19:29:01.891447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.718 [2024-12-05 19:29:02.034771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58821
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58821
00:09:09.672 19:29:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58821
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58821 ']'
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58821
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58821
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:10.036 19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58821
19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58821'
19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58821
19:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58821
00:09:12.603
00:09:12.603 real 0m4.116s
00:09:12.603 user 0m4.005s
00:09:12.603 sys 0m0.772s
00:09:12.603 19:29:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.603 19:29:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:12.603 ************************************
00:09:12.603 END TEST default_locks_via_rpc
00:09:12.603 ************************************
00:09:12.603 19:29:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:12.603 19:29:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:12.603 19:29:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.603 19:29:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:12.603 ************************************
00:09:12.603 START TEST non_locking_app_on_locked_coremask
00:09:12.603 ************************************
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58895
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58895 /var/tmp/spdk.sock
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58895 ']'
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:12.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:12.603 19:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:12.603 [2024-12-05 19:29:05.873622] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:12.603 [2024-12-05 19:29:05.874382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ]
00:09:12.861 [2024-12-05 19:29:06.060518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.861 [2024-12-05 19:29:06.191939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58911
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58911 /var/tmp/spdk2.sock
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58911 ']'
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:13.796 19:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:14.054 [2024-12-05 19:29:07.183814] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:14.054 [2024-12-05 19:29:07.184018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ]
00:09:14.054 [2024-12-05 19:29:07.387899] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:14.054 [2024-12-05 19:29:07.387980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.312 [2024-12-05 19:29:07.645451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.840 19:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:16.840 19:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:16.840 19:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58895
00:09:16.840 19:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58895
00:09:16.840 19:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58895
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58895 ']'
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58895
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58895
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:17.408 19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58895
19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58895'
19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58895
19:29:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58895
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58911
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58911 ']'
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58911
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58911
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:22.695 19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 58911
19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58911'
19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58911
19:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58911
00:09:24.074
00:09:24.074 real 0m11.644s
00:09:24.074 user 0m12.207s
00:09:24.074 sys 0m1.441s
00:09:24.074 19:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.074 19:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:24.074 ************************************
00:09:24.074 END TEST non_locking_app_on_locked_coremask
00:09:24.074 ************************************
00:09:24.074 19:29:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:24.074 19:29:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:24.074 19:29:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.074 19:29:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:24.074 ************************************
00:09:24.074 START TEST locking_app_on_unlocked_coremask
00:09:24.074 ************************************
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59066
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59066 /var/tmp/spdk.sock
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59066 ']'
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:24.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:24.074 19:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:24.334 [2024-12-05 19:29:17.574388] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:24.334 [2024-12-05 19:29:17.574585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ]
00:09:24.334 [2024-12-05 19:29:17.761191] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:24.334 [2024-12-05 19:29:17.761253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.593 [2024-12-05 19:29:17.901441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59088
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59088 /var/tmp/spdk2.sock
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59088 ']'
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:25.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:25.578 19:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:25.578 [2024-12-05 19:29:18.973678] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:25.578 [2024-12-05 19:29:18.973889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ]
00:09:25.837 [2024-12-05 19:29:19.178386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.096 [2024-12-05 19:29:19.464344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.628 19:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:28.628 19:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:28.628 19:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59088
00:09:28.628 19:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59088
00:09:28.628 19:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59066
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59066 ']'
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59066
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59066
00:09:29.194 19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:29.194 killing process with pid 59066
19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59066'
19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59066
19:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59066
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59088
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59088 ']'
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59088
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59088
00:09:34.471 19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 59088
19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59088'
19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59088
19:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59088
00:09:36.386
00:09:36.386 real 0m11.866s
00:09:36.386 user 0m12.401s
00:09:36.386 sys 0m1.598s
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:36.386 ************************************
00:09:36.386 END TEST locking_app_on_unlocked_coremask
00:09:36.386 ************************************
00:09:36.386 19:29:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:36.386 19:29:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:36.386 19:29:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:36.386 19:29:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:36.386 ************************************
00:09:36.386 START TEST locking_app_on_locked_coremask
00:09:36.386 ************************************
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59237
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59237 /var/tmp/spdk.sock
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59237 ']'
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:36.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:36.386 19:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:36.386 [2024-12-05 19:29:29.501357] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:36.386 [2024-12-05 19:29:29.501566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ]
00:09:36.386 [2024-12-05 19:29:29.691827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.644 [2024-12-05 19:29:29.857895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59257
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59257 /var/tmp/spdk2.sock
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59257 /var/tmp/spdk2.sock
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59257 /var/tmp/spdk2.sock
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59257 ']'
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:37.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:37.629 19:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:37.888 [2024-12-05 19:29:30.872617] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:37.888 [2024-12-05 19:29:30.872849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ]
00:09:37.888 [2024-12-05 19:29:31.075745] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59237 has claimed it.
00:09:37.888 [2024-12-05 19:29:31.075848] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:38.149 ERROR: process (pid: 59257) is no longer running
00:09:38.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59257) - No such process
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59237
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59237
00:09:38.149 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59237
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59237 ']'
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59237
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59237
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:38.716 19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59237
19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59237'
19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59237
19:29:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59237
00:09:41.252
00:09:41.252 real 0m4.744s
00:09:41.252 user 0m5.033s
00:09:41.252 sys 0m0.883s
00:09:41.252 19:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.252 ************************************
00:09:41.252 END TEST locking_app_on_locked_coremask
00:09:41.252 ************************************
00:09:41.252 19:29:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:41.252 19:29:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:41.252 19:29:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:41.252 19:29:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.252 19:29:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:41.252 ************************************
00:09:41.252 START TEST locking_overlapped_coremask
00:09:41.252 ************************************
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59327
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59327 /var/tmp/spdk.sock
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59327 ']'
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:41.252 19:29:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:41.252 [2024-12-05 19:29:34.304542] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:41.252 [2024-12-05 19:29:34.304740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ]
00:09:41.252 [2024-12-05 19:29:34.481406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:41.252 [2024-12-05 19:29:34.615592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:41.252 [2024-12-05 19:29:34.615711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:41.252 [2024-12-05 19:29:34.615734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59345
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59345 /var/tmp/spdk2.sock
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59345 /var/tmp/spdk2.sock
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59345 /var/tmp/spdk2.sock
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59345 ']'
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.193 19:29:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:42.193 [2024-12-05 19:29:35.623409] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:09:42.193 [2024-12-05 19:29:35.623865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59345 ] 00:09:42.451 [2024-12-05 19:29:35.828584] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59327 has claimed it. 00:09:42.451 [2024-12-05 19:29:35.828671] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:43.019 ERROR: process (pid: 59345) is no longer running 00:09:43.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59345) - No such process 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59327 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59327 ']' 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59327 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59327 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59327' 00:09:43.019 killing process with pid 59327 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59327 00:09:43.019 19:29:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59327 00:09:45.552 00:09:45.552 real 0m4.401s 00:09:45.552 user 0m11.973s 00:09:45.552 sys 0m0.687s 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.552 ************************************ 00:09:45.552 END TEST locking_overlapped_coremask 00:09:45.552 ************************************ 00:09:45.552 
19:29:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.552 19:29:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:45.552 19:29:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.552 19:29:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.552 19:29:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:45.552 ************************************ 00:09:45.552 START TEST locking_overlapped_coremask_via_rpc 00:09:45.552 ************************************ 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:45.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59409 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59409 /var/tmp/spdk.sock 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59409 ']' 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.552 19:29:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.552 [2024-12-05 19:29:38.769865] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:09:45.552 [2024-12-05 19:29:38.770030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59409 ] 00:09:45.552 [2024-12-05 19:29:38.942576] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:45.552 [2024-12-05 19:29:38.942636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.811 [2024-12-05 19:29:39.073315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.812 [2024-12-05 19:29:39.073481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.812 [2024-12-05 19:29:39.073491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59432 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59432 /var/tmp/spdk2.sock 00:09:46.747 19:29:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59432 ']' 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.747 19:29:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.747 [2024-12-05 19:29:40.051757] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:09:46.747 [2024-12-05 19:29:40.051922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59432 ] 00:09:47.006 [2024-12-05 19:29:40.246900] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:47.006 [2024-12-05 19:29:40.246977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.265 [2024-12-05 19:29:40.520101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.265 [2024-12-05 19:29:40.523852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.265 [2024-12-05 19:29:40.523873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.807 19:29:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.807 [2024-12-05 19:29:42.826903] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59409 has claimed it. 00:09:49.807 request: 00:09:49.807 { 00:09:49.807 "method": "framework_enable_cpumask_locks", 00:09:49.807 "req_id": 1 00:09:49.807 } 00:09:49.807 Got JSON-RPC error response 00:09:49.807 response: 00:09:49.807 { 00:09:49.807 "code": -32603, 00:09:49.807 "message": "Failed to claim CPU core: 2" 00:09:49.807 } 00:09:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59409 /var/tmp/spdk.sock 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59409 ']' 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.807 19:29:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59432 /var/tmp/spdk2.sock 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59432 ']' 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.807 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:50.067 ************************************ 00:09:50.067 END TEST locking_overlapped_coremask_via_rpc 00:09:50.067 ************************************ 00:09:50.067 00:09:50.067 real 0m4.778s 00:09:50.067 user 0m1.712s 00:09:50.067 sys 0m0.215s 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.067 19:29:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.067 19:29:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:50.067 19:29:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59409 ]] 00:09:50.067 19:29:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59409 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59409 ']' 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59409 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59409 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.067 killing process with pid 59409 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59409' 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59409 00:09:50.067 19:29:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59409 00:09:52.599 19:29:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59432 ]] 00:09:52.599 19:29:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59432 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59432 ']' 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59432 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59432 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:52.599 killing process with pid 59432 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59432' 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59432 00:09:52.599 19:29:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59432 00:09:55.209 19:29:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:55.209 Process with pid 59409 is not found 00:09:55.209 Process with pid 59432 is not found 00:09:55.209 19:29:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:55.209 19:29:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59409 ]] 00:09:55.210 19:29:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59409 00:09:55.210 19:29:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59409 ']' 00:09:55.210 19:29:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59409 00:09:55.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59409) - No such process 00:09:55.210 19:29:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59409 is not found' 00:09:55.210 19:29:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59432 ]] 00:09:55.210 19:29:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59432 00:09:55.210 19:29:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59432 ']' 00:09:55.210 19:29:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59432 00:09:55.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59432) - No such process 00:09:55.210 19:29:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59432 is not found' 00:09:55.210 19:29:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:55.210 00:09:55.210 real 0m50.728s 00:09:55.210 user 1m27.354s 00:09:55.210 sys 0m7.553s 00:09:55.210 19:29:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.210 19:29:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 
************************************ 00:09:55.210 END TEST cpu_locks 00:09:55.210 ************************************ 00:09:55.210 00:09:55.210 real 1m23.796s 00:09:55.210 user 2m33.708s 00:09:55.210 sys 0m11.769s 00:09:55.210 19:29:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.210 19:29:48 event -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 ************************************ 00:09:55.210 END TEST event 00:09:55.210 ************************************ 00:09:55.210 19:29:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:55.210 19:29:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.210 19:29:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.210 19:29:48 -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 ************************************ 00:09:55.210 START TEST thread 00:09:55.210 ************************************ 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:55.210 * Looking for test storage... 
00:09:55.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.210 19:29:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.210 19:29:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.210 19:29:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.210 19:29:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.210 19:29:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.210 19:29:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.210 19:29:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.210 19:29:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.210 19:29:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.210 19:29:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.210 19:29:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.210 19:29:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:55.210 19:29:48 thread -- scripts/common.sh@345 -- # : 1 00:09:55.210 19:29:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.210 19:29:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.210 19:29:48 thread -- scripts/common.sh@365 -- # decimal 1 00:09:55.210 19:29:48 thread -- scripts/common.sh@353 -- # local d=1 00:09:55.210 19:29:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.210 19:29:48 thread -- scripts/common.sh@355 -- # echo 1 00:09:55.210 19:29:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.210 19:29:48 thread -- scripts/common.sh@366 -- # decimal 2 00:09:55.210 19:29:48 thread -- scripts/common.sh@353 -- # local d=2 00:09:55.210 19:29:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.210 19:29:48 thread -- scripts/common.sh@355 -- # echo 2 00:09:55.210 19:29:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.210 19:29:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.210 19:29:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.210 19:29:48 thread -- scripts/common.sh@368 -- # return 0 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.210 --rc genhtml_branch_coverage=1 00:09:55.210 --rc genhtml_function_coverage=1 00:09:55.210 --rc genhtml_legend=1 00:09:55.210 --rc geninfo_all_blocks=1 00:09:55.210 --rc geninfo_unexecuted_blocks=1 00:09:55.210 00:09:55.210 ' 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.210 --rc genhtml_branch_coverage=1 00:09:55.210 --rc genhtml_function_coverage=1 00:09:55.210 --rc genhtml_legend=1 00:09:55.210 --rc geninfo_all_blocks=1 00:09:55.210 --rc geninfo_unexecuted_blocks=1 00:09:55.210 00:09:55.210 ' 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.210 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.210 --rc genhtml_branch_coverage=1 00:09:55.210 --rc genhtml_function_coverage=1 00:09:55.210 --rc genhtml_legend=1 00:09:55.210 --rc geninfo_all_blocks=1 00:09:55.210 --rc geninfo_unexecuted_blocks=1 00:09:55.210 00:09:55.210 ' 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.210 --rc genhtml_branch_coverage=1 00:09:55.210 --rc genhtml_function_coverage=1 00:09:55.210 --rc genhtml_legend=1 00:09:55.210 --rc geninfo_all_blocks=1 00:09:55.210 --rc geninfo_unexecuted_blocks=1 00:09:55.210 00:09:55.210 ' 00:09:55.210 19:29:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.210 19:29:48 thread -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 ************************************ 00:09:55.210 START TEST thread_poller_perf 00:09:55.210 ************************************ 00:09:55.210 19:29:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:55.210 [2024-12-05 19:29:48.333546] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:09:55.210 [2024-12-05 19:29:48.334072] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:09:55.210 [2024-12-05 19:29:48.526194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.470 [2024-12-05 19:29:48.679447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.470 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:56.846 [2024-12-05T19:29:50.287Z] ====================================== 00:09:56.846 [2024-12-05T19:29:50.287Z] busy:2214985066 (cyc) 00:09:56.846 [2024-12-05T19:29:50.287Z] total_run_count: 298000 00:09:56.846 [2024-12-05T19:29:50.287Z] tsc_hz: 2200000000 (cyc) 00:09:56.846 [2024-12-05T19:29:50.287Z] ====================================== 00:09:56.846 [2024-12-05T19:29:50.287Z] poller_cost: 7432 (cyc), 3378 (nsec) 00:09:56.846 00:09:56.846 real 0m1.624s 00:09:56.846 user 0m1.415s 00:09:56.846 sys 0m0.098s 00:09:56.846 19:29:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.846 ************************************ 00:09:56.846 END TEST thread_poller_perf 00:09:56.846 ************************************ 00:09:56.846 19:29:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:56.846 19:29:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:56.846 19:29:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:56.846 19:29:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.846 19:29:49 thread -- common/autotest_common.sh@10 -- # set +x 00:09:56.846 ************************************ 00:09:56.846 START TEST thread_poller_perf 00:09:56.846 
************************************ 00:09:56.846 19:29:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:56.846 [2024-12-05 19:29:50.013088] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:09:56.846 [2024-12-05 19:29:50.013434] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59664 ] 00:09:56.846 [2024-12-05 19:29:50.196096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.107 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:57.107 [2024-12-05 19:29:50.322361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.481 [2024-12-05T19:29:51.922Z] ====================================== 00:09:58.481 [2024-12-05T19:29:51.922Z] busy:2204118712 (cyc) 00:09:58.481 [2024-12-05T19:29:51.922Z] total_run_count: 3663000 00:09:58.481 [2024-12-05T19:29:51.922Z] tsc_hz: 2200000000 (cyc) 00:09:58.481 [2024-12-05T19:29:51.922Z] ====================================== 00:09:58.481 [2024-12-05T19:29:51.922Z] poller_cost: 601 (cyc), 273 (nsec) 00:09:58.481 00:09:58.481 real 0m1.581s 00:09:58.481 user 0m1.367s 00:09:58.481 sys 0m0.103s 00:09:58.481 19:29:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.481 ************************************ 00:09:58.481 END TEST thread_poller_perf 00:09:58.481 19:29:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:58.481 ************************************ 00:09:58.481 19:29:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:58.481 ************************************ 00:09:58.481 END TEST thread 00:09:58.481 ************************************ 00:09:58.481 
00:09:58.481 real 0m3.493s 00:09:58.481 user 0m2.933s 00:09:58.481 sys 0m0.341s 00:09:58.481 19:29:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.481 19:29:51 thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.481 19:29:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:58.481 19:29:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:58.481 19:29:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.481 19:29:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.481 19:29:51 -- common/autotest_common.sh@10 -- # set +x 00:09:58.481 ************************************ 00:09:58.481 START TEST app_cmdline 00:09:58.481 ************************************ 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:58.481 * Looking for test storage... 00:09:58.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.481 19:29:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.481 --rc genhtml_branch_coverage=1 00:09:58.481 --rc genhtml_function_coverage=1 00:09:58.481 --rc 
genhtml_legend=1 00:09:58.481 --rc geninfo_all_blocks=1 00:09:58.481 --rc geninfo_unexecuted_blocks=1 00:09:58.481 00:09:58.481 ' 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.481 --rc genhtml_branch_coverage=1 00:09:58.481 --rc genhtml_function_coverage=1 00:09:58.481 --rc genhtml_legend=1 00:09:58.481 --rc geninfo_all_blocks=1 00:09:58.481 --rc geninfo_unexecuted_blocks=1 00:09:58.481 00:09:58.481 ' 00:09:58.481 19:29:51 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.482 --rc genhtml_branch_coverage=1 00:09:58.482 --rc genhtml_function_coverage=1 00:09:58.482 --rc genhtml_legend=1 00:09:58.482 --rc geninfo_all_blocks=1 00:09:58.482 --rc geninfo_unexecuted_blocks=1 00:09:58.482 00:09:58.482 ' 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.482 --rc genhtml_branch_coverage=1 00:09:58.482 --rc genhtml_function_coverage=1 00:09:58.482 --rc genhtml_legend=1 00:09:58.482 --rc geninfo_all_blocks=1 00:09:58.482 --rc geninfo_unexecuted_blocks=1 00:09:58.482 00:09:58.482 ' 00:09:58.482 19:29:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:58.482 19:29:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59753 00:09:58.482 19:29:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:58.482 19:29:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59753 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59753 ']' 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.482 19:29:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:58.482 [2024-12-05 19:29:51.915235] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:09:58.482 [2024-12-05 19:29:51.915660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:09:58.739 [2024-12-05 19:29:52.091076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.998 [2024-12-05 19:29:52.220063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.934 19:29:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.934 19:29:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:59.934 { 00:09:59.934 "version": "SPDK v25.01-pre git sha1 98eca6fa0", 00:09:59.934 "fields": { 00:09:59.934 "major": 25, 00:09:59.934 "minor": 1, 00:09:59.934 "patch": 0, 00:09:59.934 "suffix": "-pre", 00:09:59.934 "commit": "98eca6fa0" 00:09:59.934 } 00:09:59.934 } 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:59.934 19:29:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:59.934 19:29:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.934 19:29:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:59.934 19:29:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.193 19:29:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:00.193 19:29:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:00.193 19:29:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:00.193 19:29:53 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:00.452 request: 00:10:00.452 { 00:10:00.452 "method": "env_dpdk_get_mem_stats", 00:10:00.452 "req_id": 1 00:10:00.452 } 00:10:00.452 Got JSON-RPC error response 00:10:00.452 response: 00:10:00.452 { 00:10:00.452 "code": -32601, 00:10:00.452 "message": "Method not found" 00:10:00.452 } 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.452 19:29:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59753 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59753 ']' 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59753 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59753 00:10:00.452 killing process with pid 59753 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59753' 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 59753 00:10:00.452 19:29:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 59753 00:10:03.070 00:10:03.070 real 0m4.391s 00:10:03.070 user 0m4.841s 00:10:03.070 sys 0m0.677s 00:10:03.070 19:29:56 app_cmdline -- 
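The cmdline test above starts spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods`, verifies that exactly those two methods are listed, and then confirms that a non-allowlisted call (`env_dpdk_get_mem_stats`) fails with JSON-RPC error -32601. An illustrative sketch of that allowlist behavior (not SPDK code; the dispatcher and its return shapes are simplifications):

```python
# Toy model of an RPC server started with an --rpcs-allowed list: only the
# allowed methods are answered, everything else gets JSON-RPC -32601
# ("Method not found"), as seen in the error response logged above.

ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method: str):
    if method not in ALLOWED:
        return {"code": -32601, "message": "Method not found"}
    if method == "rpc_get_methods":
        return sorted(ALLOWED)  # the test sorts and compares this list
    return {"version": "SPDK v25.01-pre git sha1 98eca6fa0"}

print(dispatch("env_dpdk_get_mem_stats"))  # rejected, matching the log
print(dispatch("rpc_get_methods"))
```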
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.070 ************************************ 00:10:03.070 END TEST app_cmdline 00:10:03.070 ************************************ 00:10:03.070 19:29:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:03.070 19:29:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:03.070 19:29:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.070 19:29:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.070 19:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:03.070 ************************************ 00:10:03.070 START TEST version 00:10:03.070 ************************************ 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:03.070 * Looking for test storage... 00:10:03.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.070 19:29:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.070 19:29:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.070 19:29:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.070 19:29:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.070 19:29:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.070 19:29:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.070 19:29:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.070 19:29:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.070 19:29:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.070 19:29:56 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:10:03.070 19:29:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.070 19:29:56 version -- scripts/common.sh@344 -- # case "$op" in 00:10:03.070 19:29:56 version -- scripts/common.sh@345 -- # : 1 00:10:03.070 19:29:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.070 19:29:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.070 19:29:56 version -- scripts/common.sh@365 -- # decimal 1 00:10:03.070 19:29:56 version -- scripts/common.sh@353 -- # local d=1 00:10:03.070 19:29:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.070 19:29:56 version -- scripts/common.sh@355 -- # echo 1 00:10:03.070 19:29:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.070 19:29:56 version -- scripts/common.sh@366 -- # decimal 2 00:10:03.070 19:29:56 version -- scripts/common.sh@353 -- # local d=2 00:10:03.070 19:29:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.070 19:29:56 version -- scripts/common.sh@355 -- # echo 2 00:10:03.070 19:29:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.070 19:29:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.070 19:29:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.070 19:29:56 version -- scripts/common.sh@368 -- # return 0 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.070 --rc genhtml_branch_coverage=1 00:10:03.070 --rc genhtml_function_coverage=1 00:10:03.070 --rc genhtml_legend=1 00:10:03.070 --rc geninfo_all_blocks=1 00:10:03.070 --rc geninfo_unexecuted_blocks=1 00:10:03.070 00:10:03.070 ' 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:10:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.070 --rc genhtml_branch_coverage=1 00:10:03.070 --rc genhtml_function_coverage=1 00:10:03.070 --rc genhtml_legend=1 00:10:03.070 --rc geninfo_all_blocks=1 00:10:03.070 --rc geninfo_unexecuted_blocks=1 00:10:03.070 00:10:03.070 ' 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.070 --rc genhtml_branch_coverage=1 00:10:03.070 --rc genhtml_function_coverage=1 00:10:03.070 --rc genhtml_legend=1 00:10:03.070 --rc geninfo_all_blocks=1 00:10:03.070 --rc geninfo_unexecuted_blocks=1 00:10:03.070 00:10:03.070 ' 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.070 --rc genhtml_branch_coverage=1 00:10:03.070 --rc genhtml_function_coverage=1 00:10:03.070 --rc genhtml_legend=1 00:10:03.070 --rc geninfo_all_blocks=1 00:10:03.070 --rc geninfo_unexecuted_blocks=1 00:10:03.070 00:10:03.070 ' 00:10:03.070 19:29:56 version -- app/version.sh@17 -- # get_header_version major 00:10:03.070 19:29:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # cut -f2 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # tr -d '"' 00:10:03.070 19:29:56 version -- app/version.sh@17 -- # major=25 00:10:03.070 19:29:56 version -- app/version.sh@18 -- # get_header_version minor 00:10:03.070 19:29:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # cut -f2 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # tr -d '"' 00:10:03.070 19:29:56 version -- app/version.sh@18 -- # minor=1 00:10:03.070 19:29:56 
version -- app/version.sh@19 -- # get_header_version patch 00:10:03.070 19:29:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # cut -f2 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # tr -d '"' 00:10:03.070 19:29:56 version -- app/version.sh@19 -- # patch=0 00:10:03.070 19:29:56 version -- app/version.sh@20 -- # get_header_version suffix 00:10:03.070 19:29:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # cut -f2 00:10:03.070 19:29:56 version -- app/version.sh@14 -- # tr -d '"' 00:10:03.070 19:29:56 version -- app/version.sh@20 -- # suffix=-pre 00:10:03.070 19:29:56 version -- app/version.sh@22 -- # version=25.1 00:10:03.070 19:29:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:03.070 19:29:56 version -- app/version.sh@28 -- # version=25.1rc0 00:10:03.070 19:29:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:03.070 19:29:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:03.070 19:29:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:03.070 19:29:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:03.070 00:10:03.070 real 0m0.248s 00:10:03.070 user 0m0.159s 00:10:03.070 sys 0m0.127s 00:10:03.070 ************************************ 00:10:03.070 END TEST version 00:10:03.070 ************************************ 00:10:03.070 19:29:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.070 19:29:56 version -- common/autotest_common.sh@10 -- # set +x 00:10:03.070 
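The version test above assembles its string from the version.h fields it greps out (major=25, minor=1, patch=0, suffix=-pre) and ends with `version=25.1rc0`. A hedged sketch of that assembly, reconstructed only as far as the log shows it (the `-pre` → `rc0` mapping is inferred from the logged values, not from version.sh itself):

```python
# Assemble an SPDK-style version string from the header fields shown in the
# log: "major.minor", ".patch" only when patch != 0, and "rc0" when the
# suffix is "-pre" (as evidenced by version=25.1 becoming version=25.1rc0).

def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"
    if suffix == "-pre":
        version += "rc0"  # assumption: pre-release suffix maps to an rc0 tag
    return version

print(spdk_version(25, 1, 0, "-pre"))  # matches py_version=25.1rc0 above
```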
19:29:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:03.070 19:29:56 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:10:03.071 19:29:56 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:03.071 19:29:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.071 19:29:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.071 19:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:03.071 ************************************ 00:10:03.071 START TEST bdev_raid 00:10:03.071 ************************************ 00:10:03.071 19:29:56 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:03.071 * Looking for test storage... 00:10:03.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:03.071 19:29:56 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.071 19:29:56 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.071 19:29:56 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@345 -- # : 1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.330 19:29:56 bdev_raid -- scripts/common.sh@368 -- # return 0 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.330 --rc genhtml_branch_coverage=1 00:10:03.330 --rc genhtml_function_coverage=1 00:10:03.330 --rc genhtml_legend=1 00:10:03.330 --rc geninfo_all_blocks=1 00:10:03.330 --rc geninfo_unexecuted_blocks=1 00:10:03.330 00:10:03.330 ' 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.330 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:03.330 --rc genhtml_branch_coverage=1 00:10:03.330 --rc genhtml_function_coverage=1 00:10:03.330 --rc genhtml_legend=1 00:10:03.330 --rc geninfo_all_blocks=1 00:10:03.330 --rc geninfo_unexecuted_blocks=1 00:10:03.330 00:10:03.330 ' 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.330 --rc genhtml_branch_coverage=1 00:10:03.330 --rc genhtml_function_coverage=1 00:10:03.330 --rc genhtml_legend=1 00:10:03.330 --rc geninfo_all_blocks=1 00:10:03.330 --rc geninfo_unexecuted_blocks=1 00:10:03.330 00:10:03.330 ' 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.330 --rc genhtml_branch_coverage=1 00:10:03.330 --rc genhtml_function_coverage=1 00:10:03.330 --rc genhtml_legend=1 00:10:03.330 --rc geninfo_all_blocks=1 00:10:03.330 --rc geninfo_unexecuted_blocks=1 00:10:03.330 00:10:03.330 ' 00:10:03.330 19:29:56 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:03.330 19:29:56 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:03.330 19:29:56 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:10:03.330 19:29:56 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:10:03.330 19:29:56 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:10:03.330 19:29:56 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:10:03.330 19:29:56 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.330 19:29:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.330 ************************************ 
00:10:03.330 START TEST raid1_resize_data_offset_test 00:10:03.330 ************************************ 00:10:03.330 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:10:03.330 19:29:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59941 00:10:03.330 19:29:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59941' 00:10:03.330 19:29:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.330 Process raid pid: 59941 00:10:03.330 19:29:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59941 00:10:03.331 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59941 ']' 00:10:03.331 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.331 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.331 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.331 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.331 19:29:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.331 [2024-12-05 19:29:56.709486] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:03.331 [2024-12-05 19:29:56.709676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.590 [2024-12-05 19:29:56.905612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.849 [2024-12-05 19:29:57.069888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.849 [2024-12-05 19:29:57.284101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.849 [2024-12-05 19:29:57.284158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.421 malloc0 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.421 malloc1 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.421 19:29:57 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.421 null0 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.421 [2024-12-05 19:29:57.822588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:10:04.421 [2024-12-05 19:29:57.825448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:04.421 [2024-12-05 19:29:57.825529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:10:04.421 [2024-12-05 19:29:57.825745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:04.421 [2024-12-05 19:29:57.825768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:10:04.421 [2024-12-05 19:29:57.826141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:04.421 [2024-12-05 19:29:57.826366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:04.421 [2024-12-05 19:29:57.826388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:04.421 [2024-12-05 19:29:57.826639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
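The raid creation above reports `blockcnt 129024, blocklen 512` for a raid1 volume built from 64 MiB base bdevs, and the test then checks `data_offset` == 2048. The size arithmetic visible in the log can be sketched as follows (the 2048-block offset is taken from the log; how SPDK derives it from the `-o 16` metadata option is not shown here):

```python
# Each base bdev is 64 MiB with 512-byte blocks; the raid1 volume's usable
# block count is the per-bdev block count minus the data_offset reserved at
# the front of each base bdev.

base_blocks = 64 * 1024 * 1024 // 512   # 131072 blocks per base bdev
data_offset = 2048                      # blocks reserved, per the log's check
usable = base_blocks - data_offset

print(usable)  # 129024, matching "blockcnt 129024, blocklen 512" above
```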
00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.421 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.681 [2024-12-05 19:29:57.886674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.681 19:29:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.250 malloc2 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.250 [2024-12-05 19:29:58.467033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:05.250 [2024-12-05 19:29:58.484462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.250 [2024-12-05 19:29:58.487136] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59941 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59941 ']' 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59941 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59941 00:10:05.250 killing process with pid 59941 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59941' 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59941 00:10:05.250 19:29:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59941 00:10:05.250 [2024-12-05 19:29:58.568176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.250 [2024-12-05 19:29:58.568517] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:10:05.251 [2024-12-05 19:29:58.568748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.251 [2024-12-05 19:29:58.568781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:10:05.251 [2024-12-05 19:29:58.600648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.251 [2024-12-05 19:29:58.601094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.251 [2024-12-05 19:29:58.601245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:07.177 [2024-12-05 19:30:00.368813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.114 19:30:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:10:08.114 00:10:08.114 real 0m4.919s 00:10:08.114 user 0m4.756s 00:10:08.114 sys 0m0.686s 00:10:08.114 19:30:01 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.114 19:30:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.114 ************************************ 00:10:08.114 END TEST raid1_resize_data_offset_test 00:10:08.114 ************************************ 00:10:08.372 19:30:01 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:10:08.372 19:30:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.372 19:30:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.372 19:30:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.372 ************************************ 00:10:08.372 START TEST raid0_resize_superblock_test 00:10:08.372 ************************************ 00:10:08.372 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60030 00:10:08.373 Process raid pid: 60030 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60030' 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60030 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60030 ']' 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.373 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.373 19:30:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.373 [2024-12-05 19:30:01.660777] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:08.373 [2024-12-05 19:30:01.660936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.630 [2024-12-05 19:30:01.837184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.631 [2024-12-05 19:30:01.974677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.896 [2024-12-05 19:30:02.188552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.896 [2024-12-05 19:30:02.188608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.462 19:30:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.462 19:30:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:09.462 19:30:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:09.462 19:30:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.462 19:30:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.028 
malloc0 00:10:10.028 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.028 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:10.028 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.028 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.028 [2024-12-05 19:30:03.247845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:10.028 [2024-12-05 19:30:03.247945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.028 [2024-12-05 19:30:03.247995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:10.028 [2024-12-05 19:30:03.248017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.028 [2024-12-05 19:30:03.251097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.028 [2024-12-05 19:30:03.251176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:10.028 pt0 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 98ef7e17-772e-4737-b0fc-3a4139bac6de 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:10:10.029 19:30:03 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 9c296ab3-1947-4cdb-a5cd-1d93b347797c 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 c1379543-0718-48af-ba19-66664d11bb48 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 [2024-12-05 19:30:03.404742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c296ab3-1947-4cdb-a5cd-1d93b347797c is claimed 00:10:10.029 [2024-12-05 19:30:03.404922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c1379543-0718-48af-ba19-66664d11bb48 is claimed 00:10:10.029 [2024-12-05 19:30:03.405137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:10.029 [2024-12-05 19:30:03.405165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:10:10.029 [2024-12-05 19:30:03.405610] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.029 [2024-12-05 19:30:03.405939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:10.029 [2024-12-05 19:30:03.405958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:10.029 [2024-12-05 19:30:03.406188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:10:10.288 [2024-12-05 19:30:03.513053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.288 [2024-12-05 19:30:03.556975] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:10.288 [2024-12-05 19:30:03.557015] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9c296ab3-1947-4cdb-a5cd-1d93b347797c' was resized: old size 131072, new size 204800 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.288 [2024-12-05 19:30:03.569046] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:10.288 [2024-12-05 19:30:03.569089] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c1379543-0718-48af-ba19-66664d11bb48' was resized: old size 131072, new size 204800 00:10:10.288 [2024-12-05 19:30:03.569151] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:10.288 19:30:03 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.288 [2024-12-05 19:30:03.685059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.288 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.548 [2024-12-05 19:30:03.728844] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:10:10.548 [2024-12-05 19:30:03.728949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:10:10.548 [2024-12-05 19:30:03.728974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.548 [2024-12-05 19:30:03.728995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:10.548 [2024-12-05 19:30:03.729139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.548 [2024-12-05 19:30:03.729193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.548 [2024-12-05 19:30:03.729214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.548 [2024-12-05 19:30:03.736722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:10.548 [2024-12-05 19:30:03.736797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.548 [2024-12-05 19:30:03.736829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:10.548 [2024-12-05 19:30:03.736846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.548 [2024-12-05 19:30:03.739892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.548 [2024-12-05 19:30:03.740075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:10:10.548 pt0 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.548 [2024-12-05 19:30:03.742578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9c296ab3-1947-4cdb-a5cd-1d93b347797c 00:10:10.548 [2024-12-05 19:30:03.742681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c296ab3-1947-4cdb-a5cd-1d93b347797c is claimed 00:10:10.548 [2024-12-05 19:30:03.742843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c1379543-0718-48af-ba19-66664d11bb48 00:10:10.548 [2024-12-05 19:30:03.742879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c1379543-0718-48af-ba19-66664d11bb48 is claimed 00:10:10.548 [2024-12-05 19:30:03.743048] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c1379543-0718-48af-ba19-66664d11bb48 (2) smaller than existing raid bdev Raid (3) 00:10:10.548 [2024-12-05 19:30:03.743087] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9c296ab3-1947-4cdb-a5cd-1d93b347797c: File exists 00:10:10.548 [2024-12-05 19:30:03.743146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:10.548 [2024-12-05 19:30:03.743165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:10:10.548 [2024-12-05 19:30:03.743534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:10.548 [2024-12-05 19:30:03.743930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:10.548 [2024-12-05 
19:30:03.743954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:10.548 [2024-12-05 19:30:03.744168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.548 [2024-12-05 19:30:03.757131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60030 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60030 ']' 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60030 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60030 00:10:10.548 killing process with pid 60030 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60030' 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60030 00:10:10.548 [2024-12-05 19:30:03.833886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.548 19:30:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60030 00:10:10.548 [2024-12-05 19:30:03.833995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.548 [2024-12-05 19:30:03.834061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.548 [2024-12-05 19:30:03.834076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:11.998 [2024-12-05 19:30:05.215524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.933 19:30:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:12.933 00:10:12.933 real 0m4.766s 00:10:12.933 user 0m5.060s 00:10:12.933 sys 0m0.652s 00:10:12.933 19:30:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.933 ************************************ 00:10:12.933 19:30:06 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.933 END TEST raid0_resize_superblock_test 00:10:12.933 ************************************ 00:10:13.192 19:30:06 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:10:13.192 19:30:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.192 19:30:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.192 19:30:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.192 ************************************ 00:10:13.192 START TEST raid1_resize_superblock_test 00:10:13.192 ************************************ 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:10:13.192 Process raid pid: 60130 00:10:13.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60130 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60130' 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60130 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60130 ']' 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.192 19:30:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.192 [2024-12-05 19:30:06.503766] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:13.192 [2024-12-05 19:30:06.504323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.452 [2024-12-05 19:30:06.694821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.452 [2024-12-05 19:30:06.838484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.711 [2024-12-05 19:30:07.065109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.712 [2024-12-05 19:30:07.065448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.278 19:30:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.278 19:30:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:14.278 19:30:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:14.278 19:30:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.278 19:30:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 malloc0 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 [2024-12-05 19:30:08.077467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:14.845 [2024-12-05 19:30:08.077577] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.845 [2024-12-05 19:30:08.077612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:14.845 [2024-12-05 19:30:08.077632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.845 [2024-12-05 19:30:08.081016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.845 [2024-12-05 19:30:08.081085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:14.845 pt0 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 120831d4-1266-408e-a4af-eb8ad3e5e03c 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 fe57678f-2d23-4fc5-9da7-cf7582c3613d 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 19:30:08 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 752e57e8-e749-493b-a10f-1773a97c7e66 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 [2024-12-05 19:30:08.237875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fe57678f-2d23-4fc5-9da7-cf7582c3613d is claimed 00:10:14.846 [2024-12-05 19:30:08.238006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 752e57e8-e749-493b-a10f-1773a97c7e66 is claimed 00:10:14.846 [2024-12-05 19:30:08.238207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:14.846 [2024-12-05 19:30:08.238233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:10:14.846 [2024-12-05 19:30:08.238623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:14.846 [2024-12-05 19:30:08.238913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:14.846 [2024-12-05 19:30:08.238932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:14.846 [2024-12-05 19:30:08.239162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.846 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.846 19:30:08 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:14.846 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:14.846 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.846 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:10:15.105 [2024-12-05 
19:30:08.354202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 [2024-12-05 19:30:08.406317] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:15.105 [2024-12-05 19:30:08.406354] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fe57678f-2d23-4fc5-9da7-cf7582c3613d' was resized: old size 131072, new size 204800 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 [2024-12-05 19:30:08.414051] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:15.105 [2024-12-05 19:30:08.414079] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '752e57e8-e749-493b-a10f-1773a97c7e66' was resized: old size 131072, new size 204800 00:10:15.105 
[2024-12-05 19:30:08.414150] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.105 19:30:08 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.105 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 [2024-12-05 19:30:08.530313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.364 [2024-12-05 19:30:08.574042] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:10:15.364 [2024-12-05 19:30:08.574377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:10:15.364 [2024-12-05 19:30:08.574526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:15.364 [2024-12-05 19:30:08.574873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.364 [2024-12-05 19:30:08.575317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.364 [2024-12-05 19:30:08.575617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:10:15.364 [2024-12-05 19:30:08.575781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.364 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.364 [2024-12-05 19:30:08.581889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:15.364 [2024-12-05 19:30:08.581960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.364 [2024-12-05 19:30:08.581989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:15.364 [2024-12-05 19:30:08.582009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.364 [2024-12-05 19:30:08.585241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.364 [2024-12-05 19:30:08.585298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:15.364 pt0 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.365 [2024-12-05 19:30:08.587770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fe57678f-2d23-4fc5-9da7-cf7582c3613d 00:10:15.365 [2024-12-05 19:30:08.587860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
fe57678f-2d23-4fc5-9da7-cf7582c3613d is claimed 00:10:15.365 [2024-12-05 19:30:08.588003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 752e57e8-e749-493b-a10f-1773a97c7e66 00:10:15.365 [2024-12-05 19:30:08.588037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 752e57e8-e749-493b-a10f-1773a97c7e66 is claimed 00:10:15.365 [2024-12-05 19:30:08.588191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 752e57e8-e749-493b-a10f-1773a97c7e66 (2) smaller than existing raid bdev Raid (3) 00:10:15.365 [2024-12-05 19:30:08.588224] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev fe57678f-2d23-4fc5-9da7-cf7582c3613d: File exists 00:10:15.365 [2024-12-05 19:30:08.588284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:15.365 [2024-12-05 19:30:08.588304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.365 [2024-12-05 19:30:08.588624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:15.365 [2024-12-05 19:30:08.588861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:15.365 [2024-12-05 19:30:08.588877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:15.365 [2024-12-05 19:30:08.589061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:15.365 19:30:08 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:10:15.365 [2024-12-05 19:30:08.606254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60130 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60130 ']' 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60130 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60130 00:10:15.365 killing process with pid 60130 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.365 19:30:08 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60130' 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60130 00:10:15.365 [2024-12-05 19:30:08.692168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.365 19:30:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60130 00:10:15.365 [2024-12-05 19:30:08.692291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.365 [2024-12-05 19:30:08.692363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.365 [2024-12-05 19:30:08.692377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:16.744 [2024-12-05 19:30:10.077065] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.788 ************************************ 00:10:17.788 END TEST raid1_resize_superblock_test 00:10:17.788 ************************************ 00:10:17.788 19:30:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:17.788 00:10:17.788 real 0m4.816s 00:10:17.788 user 0m5.050s 00:10:17.788 sys 0m0.729s 00:10:17.788 19:30:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.788 19:30:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.048 19:30:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:10:18.048 19:30:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:10:18.048 19:30:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:10:18.048 19:30:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:10:18.048 19:30:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:10:18.048 19:30:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test 
raid_function_test_raid0 raid_function_test raid0 00:10:18.048 19:30:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.048 19:30:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.048 19:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.048 ************************************ 00:10:18.048 START TEST raid_function_test_raid0 00:10:18.048 ************************************ 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60233 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60233' 00:10:18.048 Process raid pid: 60233 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60233 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60233 ']' 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.048 19:30:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:18.048 [2024-12-05 19:30:11.381315] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:18.048 [2024-12-05 19:30:11.381482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.305 [2024-12-05 19:30:11.561158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.305 [2024-12-05 19:30:11.704208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.563 [2024-12-05 19:30:11.917289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.563 [2024-12-05 19:30:11.917341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 Base_1 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 Base_2 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 [2024-12-05 19:30:12.503992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:19.129 [2024-12-05 19:30:12.506684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:19.129 [2024-12-05 19:30:12.506812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:19.129 [2024-12-05 19:30:12.506854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:19.129 [2024-12-05 19:30:12.507209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:19.129 [2024-12-05 19:30:12.507561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:19.129 [2024-12-05 19:30:12.507587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:19.129 [2024-12-05 19:30:12.507890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:19.129 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:19.388 [2024-12-05 19:30:12.816185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:19.647 /dev/nbd0 00:10:19.647 19:30:12 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:19.647 1+0 records in 00:10:19.647 1+0 records out 00:10:19.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413308 s, 9.9 MB/s 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 
-- # return 0 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:19.647 19:30:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:19.906 { 00:10:19.906 "nbd_device": "/dev/nbd0", 00:10:19.906 "bdev_name": "raid" 00:10:19.906 } 00:10:19.906 ]' 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:19.906 { 00:10:19.906 "nbd_device": "/dev/nbd0", 00:10:19.906 "bdev_name": "raid" 00:10:19.906 } 00:10:19.906 ]' 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:19.906 4096+0 records in 00:10:19.906 4096+0 records out 00:10:19.906 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0309434 s, 67.8 MB/s 00:10:19.906 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:20.165 4096+0 records in 00:10:20.165 4096+0 records out 00:10:20.165 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.325489 s, 6.4 MB/s 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:20.165 128+0 records in 00:10:20.165 128+0 records out 00:10:20.165 65536 bytes (66 kB, 64 KiB) copied, 0.000646353 s, 101 MB/s 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:20.165 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:20.424 2035+0 records in 00:10:20.424 2035+0 records out 00:10:20.424 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0111606 s, 93.4 MB/s 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:20.424 456+0 records in 00:10:20.424 456+0 records out 00:10:20.424 233472 bytes (233 kB, 228 KiB) copied, 0.00264533 s, 88.3 MB/s 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.424 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:20.682 [2024-12-05 19:30:13.923878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:20.682 19:30:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60233 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60233 ']' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60233 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60233 
00:10:20.940 killing process with pid 60233 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60233' 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60233 00:10:20.940 [2024-12-05 19:30:14.363014] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.940 19:30:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60233 00:10:20.940 [2024-12-05 19:30:14.363138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.940 [2024-12-05 19:30:14.363205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.940 [2024-12-05 19:30:14.363229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:21.198 [2024-12-05 19:30:14.558795] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.613 19:30:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:10:22.613 00:10:22.613 real 0m4.380s 00:10:22.613 user 0m5.359s 00:10:22.613 sys 0m1.006s 00:10:22.613 19:30:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.613 ************************************ 00:10:22.613 19:30:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 END TEST raid_function_test_raid0 00:10:22.613 ************************************ 00:10:22.613 19:30:15 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:10:22.613 19:30:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:10:22.613 19:30:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.613 19:30:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 ************************************ 00:10:22.613 START TEST raid_function_test_concat 00:10:22.613 ************************************ 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:10:22.613 Process raid pid: 60366 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60366 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60366' 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60366 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.613 19:30:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 [2024-12-05 19:30:15.829231] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:22.613 [2024-12-05 19:30:15.829418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.613 [2024-12-05 19:30:16.020548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.872 [2024-12-05 19:30:16.185096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.130 [2024-12-05 19:30:16.419577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.130 [2024-12-05 19:30:16.419638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.697 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.697 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:23.698 Base_1 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:23.698 Base_2 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:23.698 [2024-12-05 19:30:16.945024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:23.698 [2024-12-05 19:30:16.947481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:23.698 [2024-12-05 19:30:16.947586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:23.698 [2024-12-05 19:30:16.947607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:23.698 [2024-12-05 19:30:16.947963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:23.698 [2024-12-05 19:30:16.948157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:23.698 [2024-12-05 19:30:16.948224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:23.698 [2024-12-05 19:30:16.948424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.698 19:30:16 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:23.698 19:30:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:23.698 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:23.957 [2024-12-05 19:30:17.293225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:23.957 /dev/nbd0 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.957 1+0 records in 00:10:23.957 1+0 records out 00:10:23.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293422 s, 14.0 MB/s 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:23.957 
19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:23.957 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:24.232 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:24.232 { 00:10:24.232 "nbd_device": "/dev/nbd0", 00:10:24.232 "bdev_name": "raid" 00:10:24.232 } 00:10:24.232 ]' 00:10:24.232 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:24.232 { 00:10:24.232 "nbd_device": "/dev/nbd0", 00:10:24.232 "bdev_name": "raid" 00:10:24.232 } 00:10:24.232 ]' 00:10:24.232 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:24.491 
19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:24.491 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:24.492 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:24.492 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:24.492 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:24.492 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:24.492 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:24.492 4096+0 records in 00:10:24.492 4096+0 records out 00:10:24.492 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292691 s, 71.7 MB/s 00:10:24.492 19:30:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:24.750 4096+0 records in 00:10:24.750 4096+0 
records out 00:10:24.750 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.339002 s, 6.2 MB/s 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:24.750 128+0 records in 00:10:24.750 128+0 records out 00:10:24.750 65536 bytes (66 kB, 64 KiB) copied, 0.00152827 s, 42.9 MB/s 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:10:24.750 2035+0 records in 00:10:24.750 2035+0 records out 00:10:24.750 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00890075 s, 117 MB/s 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:24.750 456+0 records in 00:10:24.750 456+0 records out 00:10:24.750 233472 bytes (233 kB, 228 KiB) copied, 0.00151038 s, 155 MB/s 00:10:24.750 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.008 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:25.267 [2024-12-05 19:30:18.557970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:25.267 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:25.267 19:30:18 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:25.525 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:25.525 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:25.525 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:25.784 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:25.784 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:25.784 19:30:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60366 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60366 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60366 00:10:25.784 killing process with pid 60366 00:10:25.784 19:30:19 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60366' 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60366 00:10:25.784 [2024-12-05 19:30:19.038420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.784 19:30:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60366 00:10:25.784 [2024-12-05 19:30:19.038546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.784 [2024-12-05 19:30:19.038617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.784 [2024-12-05 19:30:19.038637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:26.043 [2024-12-05 19:30:19.234930] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.974 19:30:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:10:26.974 00:10:26.974 real 0m4.596s 00:10:26.974 user 0m5.762s 00:10:26.974 sys 0m1.035s 00:10:26.974 19:30:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.974 19:30:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:26.974 ************************************ 00:10:26.974 END TEST raid_function_test_concat 00:10:26.974 ************************************ 00:10:26.974 19:30:20 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:10:26.974 19:30:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.974 19:30:20 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.974 19:30:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.974 ************************************ 00:10:26.974 START TEST raid0_resize_test 00:10:26.974 ************************************ 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60502 00:10:26.974 Process raid pid: 60502 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60502' 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60502 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60502 ']' 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:10:26.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.974 19:30:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 [2024-12-05 19:30:20.468596] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:27.232 [2024-12-05 19:30:20.468797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.232 [2024-12-05 19:30:20.658664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.490 [2024-12-05 19:30:20.817221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.750 [2024-12-05 19:30:21.026938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.750 [2024-12-05 19:30:21.026979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.008 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.008 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.008 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:28.008 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.008 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.267 Base_1 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.267 
19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.267 Base_2 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.267 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.267 [2024-12-05 19:30:21.461628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:28.267 [2024-12-05 19:30:21.464001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:28.267 [2024-12-05 19:30:21.464076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.267 [2024-12-05 19:30:21.464096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:28.267 [2024-12-05 19:30:21.464401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:28.268 [2024-12-05 19:30:21.464562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:28.268 [2024-12-05 19:30:21.464577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:28.268 [2024-12-05 19:30:21.464761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.268 
19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.268 [2024-12-05 19:30:21.469618] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:28.268 [2024-12-05 19:30:21.469656] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:28.268 true 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.268 [2024-12-05 19:30:21.481831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.268 [2024-12-05 19:30:21.533612] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:28.268 [2024-12-05 19:30:21.533644] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:28.268 [2024-12-05 19:30:21.533682] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:28.268 true 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.268 [2024-12-05 19:30:21.545825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60502 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60502 ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60502 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60502 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.268 killing process with pid 60502 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60502' 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60502 00:10:28.268 [2024-12-05 19:30:21.620748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.268 19:30:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60502 00:10:28.268 [2024-12-05 19:30:21.620832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.268 [2024-12-05 19:30:21.620895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.268 [2024-12-05 19:30:21.620911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:28.268 [2024-12-05 19:30:21.636204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.646 19:30:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:29.646 00:10:29.646 real 0m2.318s 00:10:29.646 user 0m2.570s 00:10:29.646 sys 0m0.369s 00:10:29.646 19:30:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.646 
19:30:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.646 ************************************ 00:10:29.646 END TEST raid0_resize_test 00:10:29.646 ************************************ 00:10:29.646 19:30:22 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:29.646 19:30:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.646 19:30:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.646 19:30:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.646 ************************************ 00:10:29.646 START TEST raid1_resize_test 00:10:29.646 ************************************ 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60558 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60558' 
00:10:29.646 Process raid pid: 60558 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60558 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60558 ']' 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.646 19:30:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.646 [2024-12-05 19:30:22.841504] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:29.646 [2024-12-05 19:30:22.841694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.646 [2024-12-05 19:30:23.032606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.904 [2024-12-05 19:30:23.193627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.163 [2024-12-05 19:30:23.417472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.163 [2024-12-05 19:30:23.417529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 Base_1 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 Base_2 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 [2024-12-05 19:30:23.854818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:30.423 [2024-12-05 19:30:23.857321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:30.423 [2024-12-05 19:30:23.857408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:30.423 [2024-12-05 19:30:23.857430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:30.423 [2024-12-05 19:30:23.857782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:30.423 [2024-12-05 19:30:23.857976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:30.423 [2024-12-05 19:30:23.857993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:30.423 [2024-12-05 19:30:23.858181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.423 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 [2024-12-05 19:30:23.862796] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:30.423 [2024-12-05 19:30:23.862838] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:30.682 true 00:10:30.682 
19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:30.682 [2024-12-05 19:30:23.875183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 [2024-12-05 19:30:23.922826] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:30.682 [2024-12-05 19:30:23.922881] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:30.682 [2024-12-05 19:30:23.922924] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:30.682 true 00:10:30.682 19:30:23 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 [2024-12-05 19:30:23.935004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60558 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60558 ']' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60558 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.682 19:30:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60558 00:10:30.682 19:30:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.682 19:30:24 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.682 killing process with pid 60558 00:10:30.682 19:30:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60558' 00:10:30.682 19:30:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60558 00:10:30.682 [2024-12-05 19:30:24.008032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.682 [2024-12-05 19:30:24.008152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.682 19:30:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60558 00:10:30.682 [2024-12-05 19:30:24.008781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.682 [2024-12-05 19:30:24.008814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:30.682 [2024-12-05 19:30:24.024192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.060 19:30:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:32.060 00:10:32.060 real 0m2.356s 00:10:32.060 user 0m2.594s 00:10:32.060 sys 0m0.400s 00:10:32.060 19:30:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.060 19:30:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.060 ************************************ 00:10:32.060 END TEST raid1_resize_test 00:10:32.060 ************************************ 00:10:32.060 19:30:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:32.060 19:30:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:32.060 19:30:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:32.060 19:30:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.060 19:30:25 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.060 19:30:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.060 ************************************ 00:10:32.060 START TEST raid_state_function_test 00:10:32.060 ************************************ 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60621 00:10:32.060 Process raid pid: 60621 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60621' 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60621 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60621 ']' 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.060 19:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.060 [2024-12-05 19:30:25.242147] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:32.060 [2024-12-05 19:30:25.242314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.061 [2024-12-05 19:30:25.417555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.318 [2024-12-05 19:30:25.553071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.577 [2024-12-05 19:30:25.764695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.577 [2024-12-05 19:30:25.764772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.144 [2024-12-05 19:30:26.288170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.144 
[2024-12-05 19:30:26.288240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.144 [2024-12-05 19:30:26.288258] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.144 [2024-12-05 19:30:26.288274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.144 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.145 "name": "Existed_Raid", 00:10:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.145 "strip_size_kb": 64, 00:10:33.145 "state": "configuring", 00:10:33.145 "raid_level": "raid0", 00:10:33.145 "superblock": false, 00:10:33.145 "num_base_bdevs": 2, 00:10:33.145 "num_base_bdevs_discovered": 0, 00:10:33.145 "num_base_bdevs_operational": 2, 00:10:33.145 "base_bdevs_list": [ 00:10:33.145 { 00:10:33.145 "name": "BaseBdev1", 00:10:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.145 "is_configured": false, 00:10:33.145 "data_offset": 0, 00:10:33.145 "data_size": 0 00:10:33.145 }, 00:10:33.145 { 00:10:33.145 "name": "BaseBdev2", 00:10:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.145 "is_configured": false, 00:10:33.145 "data_offset": 0, 00:10:33.145 "data_size": 0 00:10:33.145 } 00:10:33.145 ] 00:10:33.145 }' 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.145 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.403 [2024-12-05 19:30:26.792262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.403 [2024-12-05 19:30:26.792311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.403 [2024-12-05 19:30:26.800231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.403 [2024-12-05 19:30:26.800287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.403 [2024-12-05 19:30:26.800303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.403 [2024-12-05 19:30:26.800323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.403 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.661 [2024-12-05 19:30:26.845196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.661 BaseBdev1 00:10:33.661 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.661 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:33.661 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:33.661 19:30:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.661 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.661 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.662 [ 00:10:33.662 { 00:10:33.662 "name": "BaseBdev1", 00:10:33.662 "aliases": [ 00:10:33.662 "86e5dc57-85d4-40dd-9650-dd26993dbc7e" 00:10:33.662 ], 00:10:33.662 "product_name": "Malloc disk", 00:10:33.662 "block_size": 512, 00:10:33.662 "num_blocks": 65536, 00:10:33.662 "uuid": "86e5dc57-85d4-40dd-9650-dd26993dbc7e", 00:10:33.662 "assigned_rate_limits": { 00:10:33.662 "rw_ios_per_sec": 0, 00:10:33.662 "rw_mbytes_per_sec": 0, 00:10:33.662 "r_mbytes_per_sec": 0, 00:10:33.662 "w_mbytes_per_sec": 0 00:10:33.662 }, 00:10:33.662 "claimed": true, 00:10:33.662 "claim_type": "exclusive_write", 00:10:33.662 "zoned": false, 00:10:33.662 "supported_io_types": { 00:10:33.662 "read": true, 00:10:33.662 "write": true, 00:10:33.662 "unmap": true, 00:10:33.662 "flush": true, 
00:10:33.662 "reset": true, 00:10:33.662 "nvme_admin": false, 00:10:33.662 "nvme_io": false, 00:10:33.662 "nvme_io_md": false, 00:10:33.662 "write_zeroes": true, 00:10:33.662 "zcopy": true, 00:10:33.662 "get_zone_info": false, 00:10:33.662 "zone_management": false, 00:10:33.662 "zone_append": false, 00:10:33.662 "compare": false, 00:10:33.662 "compare_and_write": false, 00:10:33.662 "abort": true, 00:10:33.662 "seek_hole": false, 00:10:33.662 "seek_data": false, 00:10:33.662 "copy": true, 00:10:33.662 "nvme_iov_md": false 00:10:33.662 }, 00:10:33.662 "memory_domains": [ 00:10:33.662 { 00:10:33.662 "dma_device_id": "system", 00:10:33.662 "dma_device_type": 1 00:10:33.662 }, 00:10:33.662 { 00:10:33.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.662 "dma_device_type": 2 00:10:33.662 } 00:10:33.662 ], 00:10:33.662 "driver_specific": {} 00:10:33.662 } 00:10:33.662 ] 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.662 "name": "Existed_Raid", 00:10:33.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.662 "strip_size_kb": 64, 00:10:33.662 "state": "configuring", 00:10:33.662 "raid_level": "raid0", 00:10:33.662 "superblock": false, 00:10:33.662 "num_base_bdevs": 2, 00:10:33.662 "num_base_bdevs_discovered": 1, 00:10:33.662 "num_base_bdevs_operational": 2, 00:10:33.662 "base_bdevs_list": [ 00:10:33.662 { 00:10:33.662 "name": "BaseBdev1", 00:10:33.662 "uuid": "86e5dc57-85d4-40dd-9650-dd26993dbc7e", 00:10:33.662 "is_configured": true, 00:10:33.662 "data_offset": 0, 00:10:33.662 "data_size": 65536 00:10:33.662 }, 00:10:33.662 { 00:10:33.662 "name": "BaseBdev2", 00:10:33.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.662 "is_configured": false, 00:10:33.662 "data_offset": 0, 00:10:33.662 "data_size": 0 00:10:33.662 } 00:10:33.662 ] 00:10:33.662 }' 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.662 19:30:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.229 [2024-12-05 19:30:27.409430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.229 [2024-12-05 19:30:27.409501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.229 [2024-12-05 19:30:27.417498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.229 [2024-12-05 19:30:27.419981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.229 [2024-12-05 19:30:27.420038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.229 "name": "Existed_Raid", 00:10:34.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.229 "strip_size_kb": 64, 00:10:34.229 "state": "configuring", 00:10:34.229 "raid_level": "raid0", 00:10:34.229 "superblock": false, 00:10:34.229 "num_base_bdevs": 2, 00:10:34.229 
"num_base_bdevs_discovered": 1, 00:10:34.229 "num_base_bdevs_operational": 2, 00:10:34.229 "base_bdevs_list": [ 00:10:34.229 { 00:10:34.229 "name": "BaseBdev1", 00:10:34.229 "uuid": "86e5dc57-85d4-40dd-9650-dd26993dbc7e", 00:10:34.229 "is_configured": true, 00:10:34.229 "data_offset": 0, 00:10:34.229 "data_size": 65536 00:10:34.229 }, 00:10:34.229 { 00:10:34.229 "name": "BaseBdev2", 00:10:34.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.229 "is_configured": false, 00:10:34.229 "data_offset": 0, 00:10:34.229 "data_size": 0 00:10:34.229 } 00:10:34.229 ] 00:10:34.229 }' 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.229 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.488 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.488 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.488 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.747 [2024-12-05 19:30:27.953063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.747 [2024-12-05 19:30:27.953125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:34.747 [2024-12-05 19:30:27.953150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:34.747 [2024-12-05 19:30:27.953483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:34.747 [2024-12-05 19:30:27.953730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.747 [2024-12-05 19:30:27.953762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:34.747 [2024-12-05 19:30:27.954087] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.747 BaseBdev2 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.747 [ 00:10:34.747 { 00:10:34.747 "name": "BaseBdev2", 00:10:34.747 "aliases": [ 00:10:34.747 "dcd92b78-0132-4fc7-ad32-a5e5112d78a0" 00:10:34.747 ], 00:10:34.747 "product_name": "Malloc disk", 00:10:34.747 "block_size": 512, 00:10:34.747 "num_blocks": 65536, 00:10:34.747 "uuid": "dcd92b78-0132-4fc7-ad32-a5e5112d78a0", 00:10:34.747 
"assigned_rate_limits": { 00:10:34.747 "rw_ios_per_sec": 0, 00:10:34.747 "rw_mbytes_per_sec": 0, 00:10:34.747 "r_mbytes_per_sec": 0, 00:10:34.747 "w_mbytes_per_sec": 0 00:10:34.747 }, 00:10:34.747 "claimed": true, 00:10:34.747 "claim_type": "exclusive_write", 00:10:34.747 "zoned": false, 00:10:34.747 "supported_io_types": { 00:10:34.747 "read": true, 00:10:34.747 "write": true, 00:10:34.747 "unmap": true, 00:10:34.747 "flush": true, 00:10:34.747 "reset": true, 00:10:34.747 "nvme_admin": false, 00:10:34.747 "nvme_io": false, 00:10:34.747 "nvme_io_md": false, 00:10:34.747 "write_zeroes": true, 00:10:34.747 "zcopy": true, 00:10:34.747 "get_zone_info": false, 00:10:34.747 "zone_management": false, 00:10:34.747 "zone_append": false, 00:10:34.747 "compare": false, 00:10:34.747 "compare_and_write": false, 00:10:34.747 "abort": true, 00:10:34.747 "seek_hole": false, 00:10:34.747 "seek_data": false, 00:10:34.747 "copy": true, 00:10:34.747 "nvme_iov_md": false 00:10:34.747 }, 00:10:34.747 "memory_domains": [ 00:10:34.747 { 00:10:34.747 "dma_device_id": "system", 00:10:34.747 "dma_device_type": 1 00:10:34.747 }, 00:10:34.747 { 00:10:34.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.747 "dma_device_type": 2 00:10:34.747 } 00:10:34.747 ], 00:10:34.747 "driver_specific": {} 00:10:34.747 } 00:10:34.747 ] 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.747 19:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.747 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.747 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.747 "name": "Existed_Raid", 00:10:34.747 "uuid": "9fce5657-e4b4-4d12-ad28-b0c2ece6e87b", 00:10:34.747 "strip_size_kb": 64, 00:10:34.747 "state": "online", 00:10:34.747 "raid_level": "raid0", 00:10:34.747 "superblock": false, 00:10:34.747 "num_base_bdevs": 2, 00:10:34.747 "num_base_bdevs_discovered": 2, 00:10:34.747 "num_base_bdevs_operational": 2, 00:10:34.747 "base_bdevs_list": [ 00:10:34.747 { 
00:10:34.747 "name": "BaseBdev1", 00:10:34.747 "uuid": "86e5dc57-85d4-40dd-9650-dd26993dbc7e", 00:10:34.747 "is_configured": true, 00:10:34.747 "data_offset": 0, 00:10:34.747 "data_size": 65536 00:10:34.747 }, 00:10:34.747 { 00:10:34.747 "name": "BaseBdev2", 00:10:34.747 "uuid": "dcd92b78-0132-4fc7-ad32-a5e5112d78a0", 00:10:34.747 "is_configured": true, 00:10:34.747 "data_offset": 0, 00:10:34.747 "data_size": 65536 00:10:34.747 } 00:10:34.747 ] 00:10:34.747 }' 00:10:34.747 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.747 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.315 [2024-12-05 19:30:28.497593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:35.315 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.315 "name": "Existed_Raid", 00:10:35.315 "aliases": [ 00:10:35.315 "9fce5657-e4b4-4d12-ad28-b0c2ece6e87b" 00:10:35.315 ], 00:10:35.315 "product_name": "Raid Volume", 00:10:35.315 "block_size": 512, 00:10:35.315 "num_blocks": 131072, 00:10:35.315 "uuid": "9fce5657-e4b4-4d12-ad28-b0c2ece6e87b", 00:10:35.315 "assigned_rate_limits": { 00:10:35.316 "rw_ios_per_sec": 0, 00:10:35.316 "rw_mbytes_per_sec": 0, 00:10:35.316 "r_mbytes_per_sec": 0, 00:10:35.316 "w_mbytes_per_sec": 0 00:10:35.316 }, 00:10:35.316 "claimed": false, 00:10:35.316 "zoned": false, 00:10:35.316 "supported_io_types": { 00:10:35.316 "read": true, 00:10:35.316 "write": true, 00:10:35.316 "unmap": true, 00:10:35.316 "flush": true, 00:10:35.316 "reset": true, 00:10:35.316 "nvme_admin": false, 00:10:35.316 "nvme_io": false, 00:10:35.316 "nvme_io_md": false, 00:10:35.316 "write_zeroes": true, 00:10:35.316 "zcopy": false, 00:10:35.316 "get_zone_info": false, 00:10:35.316 "zone_management": false, 00:10:35.316 "zone_append": false, 00:10:35.316 "compare": false, 00:10:35.316 "compare_and_write": false, 00:10:35.316 "abort": false, 00:10:35.316 "seek_hole": false, 00:10:35.316 "seek_data": false, 00:10:35.316 "copy": false, 00:10:35.316 "nvme_iov_md": false 00:10:35.316 }, 00:10:35.316 "memory_domains": [ 00:10:35.316 { 00:10:35.316 "dma_device_id": "system", 00:10:35.316 "dma_device_type": 1 00:10:35.316 }, 00:10:35.316 { 00:10:35.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.316 "dma_device_type": 2 00:10:35.316 }, 00:10:35.316 { 00:10:35.316 "dma_device_id": "system", 00:10:35.316 "dma_device_type": 1 00:10:35.316 }, 00:10:35.316 { 00:10:35.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.316 "dma_device_type": 2 00:10:35.316 } 00:10:35.316 ], 00:10:35.316 "driver_specific": { 00:10:35.316 "raid": { 00:10:35.316 "uuid": "9fce5657-e4b4-4d12-ad28-b0c2ece6e87b", 
00:10:35.316 "strip_size_kb": 64, 00:10:35.316 "state": "online", 00:10:35.316 "raid_level": "raid0", 00:10:35.316 "superblock": false, 00:10:35.316 "num_base_bdevs": 2, 00:10:35.316 "num_base_bdevs_discovered": 2, 00:10:35.316 "num_base_bdevs_operational": 2, 00:10:35.316 "base_bdevs_list": [ 00:10:35.316 { 00:10:35.316 "name": "BaseBdev1", 00:10:35.316 "uuid": "86e5dc57-85d4-40dd-9650-dd26993dbc7e", 00:10:35.316 "is_configured": true, 00:10:35.316 "data_offset": 0, 00:10:35.316 "data_size": 65536 00:10:35.316 }, 00:10:35.316 { 00:10:35.316 "name": "BaseBdev2", 00:10:35.316 "uuid": "dcd92b78-0132-4fc7-ad32-a5e5112d78a0", 00:10:35.316 "is_configured": true, 00:10:35.316 "data_offset": 0, 00:10:35.316 "data_size": 65536 00:10:35.316 } 00:10:35.316 ] 00:10:35.316 } 00:10:35.316 } 00:10:35.316 }' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:35.316 BaseBdev2' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.316 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.316 [2024-12-05 19:30:28.741340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.316 [2024-12-05 19:30:28.741389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.316 [2024-12-05 19:30:28.741460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.575 19:30:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.575 
19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.575 "name": "Existed_Raid", 00:10:35.575 "uuid": "9fce5657-e4b4-4d12-ad28-b0c2ece6e87b", 00:10:35.575 "strip_size_kb": 64, 00:10:35.575 "state": "offline", 00:10:35.575 "raid_level": "raid0", 00:10:35.575 "superblock": false, 00:10:35.575 "num_base_bdevs": 2, 00:10:35.575 "num_base_bdevs_discovered": 1, 00:10:35.575 "num_base_bdevs_operational": 1, 00:10:35.575 "base_bdevs_list": [ 00:10:35.575 { 00:10:35.575 "name": null, 00:10:35.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.575 "is_configured": false, 00:10:35.575 "data_offset": 0, 00:10:35.575 "data_size": 65536 00:10:35.575 }, 00:10:35.575 { 00:10:35.575 "name": "BaseBdev2", 00:10:35.575 "uuid": "dcd92b78-0132-4fc7-ad32-a5e5112d78a0", 00:10:35.575 "is_configured": true, 00:10:35.575 "data_offset": 0, 00:10:35.575 "data_size": 65536 00:10:35.575 } 00:10:35.575 ] 00:10:35.575 }' 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.575 19:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.140 [2024-12-05 19:30:29.372635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.140 [2024-12-05 19:30:29.372735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.140 19:30:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60621 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60621 ']' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60621 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60621 00:10:36.140 killing process with pid 60621 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60621' 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60621 00:10:36.140 [2024-12-05 19:30:29.546048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.140 19:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60621 00:10:36.140 [2024-12-05 19:30:29.561354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.515 ************************************ 00:10:37.515 END TEST raid_state_function_test 00:10:37.515 ************************************ 00:10:37.515 19:30:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@328 -- # return 0 00:10:37.515 00:10:37.515 real 0m5.471s 00:10:37.515 user 0m8.278s 00:10:37.515 sys 0m0.766s 00:10:37.515 19:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.515 19:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.516 19:30:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:37.516 19:30:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:37.516 19:30:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.516 19:30:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.516 ************************************ 00:10:37.516 START TEST raid_state_function_test_sb 00:10:37.516 ************************************ 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.516 Process raid pid: 60879 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60879 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60879' 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60879 00:10:37.516 19:30:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60879 ']' 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.516 19:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.516 [2024-12-05 19:30:30.782425] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:37.516 [2024-12-05 19:30:30.783546] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.775 [2024-12-05 19:30:30.972385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.775 [2024-12-05 19:30:31.134667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.034 [2024-12-05 19:30:31.348366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.034 [2024-12-05 19:30:31.348644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 [2024-12-05 19:30:31.806112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.617 [2024-12-05 19:30:31.806368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.617 [2024-12-05 19:30:31.806495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.617 [2024-12-05 19:30:31.806530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.617 
19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.617 "name": "Existed_Raid", 00:10:38.617 "uuid": "676183a6-db11-47dd-9ed6-766032b636a0", 00:10:38.617 "strip_size_kb": 
64, 00:10:38.617 "state": "configuring", 00:10:38.617 "raid_level": "raid0", 00:10:38.617 "superblock": true, 00:10:38.617 "num_base_bdevs": 2, 00:10:38.617 "num_base_bdevs_discovered": 0, 00:10:38.617 "num_base_bdevs_operational": 2, 00:10:38.617 "base_bdevs_list": [ 00:10:38.617 { 00:10:38.617 "name": "BaseBdev1", 00:10:38.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.617 "is_configured": false, 00:10:38.617 "data_offset": 0, 00:10:38.617 "data_size": 0 00:10:38.617 }, 00:10:38.617 { 00:10:38.617 "name": "BaseBdev2", 00:10:38.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.617 "is_configured": false, 00:10:38.617 "data_offset": 0, 00:10:38.617 "data_size": 0 00:10:38.617 } 00:10:38.617 ] 00:10:38.617 }' 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.617 19:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.185 [2024-12-05 19:30:32.326150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.185 [2024-12-05 19:30:32.326193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.185 19:30:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.185 [2024-12-05 19:30:32.338192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.185 [2024-12-05 19:30:32.338381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.185 [2024-12-05 19:30:32.338526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.185 [2024-12-05 19:30:32.338595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.185 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 [2024-12-05 19:30:32.383846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.186 BaseBdev1 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 [ 00:10:39.186 { 00:10:39.186 "name": "BaseBdev1", 00:10:39.186 "aliases": [ 00:10:39.186 "12158a32-6b2d-44ea-bee9-287d2b265010" 00:10:39.186 ], 00:10:39.186 "product_name": "Malloc disk", 00:10:39.186 "block_size": 512, 00:10:39.186 "num_blocks": 65536, 00:10:39.186 "uuid": "12158a32-6b2d-44ea-bee9-287d2b265010", 00:10:39.186 "assigned_rate_limits": { 00:10:39.186 "rw_ios_per_sec": 0, 00:10:39.186 "rw_mbytes_per_sec": 0, 00:10:39.186 "r_mbytes_per_sec": 0, 00:10:39.186 "w_mbytes_per_sec": 0 00:10:39.186 }, 00:10:39.186 "claimed": true, 00:10:39.186 "claim_type": "exclusive_write", 00:10:39.186 "zoned": false, 00:10:39.186 "supported_io_types": { 00:10:39.186 "read": true, 00:10:39.186 "write": true, 00:10:39.186 "unmap": true, 00:10:39.186 "flush": true, 00:10:39.186 "reset": true, 00:10:39.186 "nvme_admin": false, 00:10:39.186 "nvme_io": false, 00:10:39.186 "nvme_io_md": false, 00:10:39.186 "write_zeroes": true, 00:10:39.186 "zcopy": true, 00:10:39.186 "get_zone_info": false, 00:10:39.186 "zone_management": false, 00:10:39.186 "zone_append": false, 00:10:39.186 "compare": false, 00:10:39.186 "compare_and_write": false, 00:10:39.186 
"abort": true, 00:10:39.186 "seek_hole": false, 00:10:39.186 "seek_data": false, 00:10:39.186 "copy": true, 00:10:39.186 "nvme_iov_md": false 00:10:39.186 }, 00:10:39.186 "memory_domains": [ 00:10:39.186 { 00:10:39.186 "dma_device_id": "system", 00:10:39.186 "dma_device_type": 1 00:10:39.186 }, 00:10:39.186 { 00:10:39.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.186 "dma_device_type": 2 00:10:39.186 } 00:10:39.186 ], 00:10:39.186 "driver_specific": {} 00:10:39.186 } 00:10:39.186 ] 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.186 "name": "Existed_Raid", 00:10:39.186 "uuid": "cd58e5ed-882e-4ad9-815d-fd4c043250db", 00:10:39.186 "strip_size_kb": 64, 00:10:39.186 "state": "configuring", 00:10:39.186 "raid_level": "raid0", 00:10:39.186 "superblock": true, 00:10:39.186 "num_base_bdevs": 2, 00:10:39.186 "num_base_bdevs_discovered": 1, 00:10:39.186 "num_base_bdevs_operational": 2, 00:10:39.186 "base_bdevs_list": [ 00:10:39.186 { 00:10:39.186 "name": "BaseBdev1", 00:10:39.186 "uuid": "12158a32-6b2d-44ea-bee9-287d2b265010", 00:10:39.186 "is_configured": true, 00:10:39.186 "data_offset": 2048, 00:10:39.186 "data_size": 63488 00:10:39.186 }, 00:10:39.186 { 00:10:39.186 "name": "BaseBdev2", 00:10:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.186 "is_configured": false, 00:10:39.186 "data_offset": 0, 00:10:39.186 "data_size": 0 00:10:39.186 } 00:10:39.186 ] 00:10:39.186 }' 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.186 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.753 [2024-12-05 19:30:32.908079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.753 [2024-12-05 19:30:32.908282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.753 [2024-12-05 19:30:32.916126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.753 [2024-12-05 19:30:32.918673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.753 [2024-12-05 19:30:32.918740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.753 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.754 "name": "Existed_Raid", 00:10:39.754 "uuid": "ba8f2ca7-faf0-4749-a7b3-d080d58ddfc4", 00:10:39.754 "strip_size_kb": 64, 00:10:39.754 "state": "configuring", 00:10:39.754 "raid_level": "raid0", 00:10:39.754 "superblock": true, 00:10:39.754 "num_base_bdevs": 2, 00:10:39.754 "num_base_bdevs_discovered": 1, 00:10:39.754 "num_base_bdevs_operational": 2, 00:10:39.754 "base_bdevs_list": [ 00:10:39.754 { 00:10:39.754 "name": "BaseBdev1", 00:10:39.754 "uuid": "12158a32-6b2d-44ea-bee9-287d2b265010", 00:10:39.754 "is_configured": true, 00:10:39.754 "data_offset": 2048, 
00:10:39.754 "data_size": 63488 00:10:39.754 }, 00:10:39.754 { 00:10:39.754 "name": "BaseBdev2", 00:10:39.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.754 "is_configured": false, 00:10:39.754 "data_offset": 0, 00:10:39.754 "data_size": 0 00:10:39.754 } 00:10:39.754 ] 00:10:39.754 }' 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.754 19:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.012 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.012 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.012 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.270 [2024-12-05 19:30:33.478768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.270 [2024-12-05 19:30:33.479317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.270 [2024-12-05 19:30:33.479465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:40.270 BaseBdev2 00:10:40.270 [2024-12-05 19:30:33.479990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:40.270 [2024-12-05 19:30:33.480202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.270 [2024-12-05 19:30:33.480236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:40.270 [2024-12-05 19:30:33.480411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.270 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 [ 00:10:40.271 { 00:10:40.271 "name": "BaseBdev2", 00:10:40.271 "aliases": [ 00:10:40.271 "bbd0c58a-e9bd-48ba-a849-9221d49391d8" 00:10:40.271 ], 00:10:40.271 "product_name": "Malloc disk", 00:10:40.271 "block_size": 512, 00:10:40.271 "num_blocks": 65536, 00:10:40.271 "uuid": "bbd0c58a-e9bd-48ba-a849-9221d49391d8", 00:10:40.271 "assigned_rate_limits": { 00:10:40.271 "rw_ios_per_sec": 0, 00:10:40.271 "rw_mbytes_per_sec": 0, 00:10:40.271 "r_mbytes_per_sec": 0, 00:10:40.271 "w_mbytes_per_sec": 0 00:10:40.271 }, 00:10:40.271 "claimed": true, 00:10:40.271 "claim_type": 
"exclusive_write", 00:10:40.271 "zoned": false, 00:10:40.271 "supported_io_types": { 00:10:40.271 "read": true, 00:10:40.271 "write": true, 00:10:40.271 "unmap": true, 00:10:40.271 "flush": true, 00:10:40.271 "reset": true, 00:10:40.271 "nvme_admin": false, 00:10:40.271 "nvme_io": false, 00:10:40.271 "nvme_io_md": false, 00:10:40.271 "write_zeroes": true, 00:10:40.271 "zcopy": true, 00:10:40.271 "get_zone_info": false, 00:10:40.271 "zone_management": false, 00:10:40.271 "zone_append": false, 00:10:40.271 "compare": false, 00:10:40.271 "compare_and_write": false, 00:10:40.271 "abort": true, 00:10:40.271 "seek_hole": false, 00:10:40.271 "seek_data": false, 00:10:40.271 "copy": true, 00:10:40.271 "nvme_iov_md": false 00:10:40.271 }, 00:10:40.271 "memory_domains": [ 00:10:40.271 { 00:10:40.271 "dma_device_id": "system", 00:10:40.271 "dma_device_type": 1 00:10:40.271 }, 00:10:40.271 { 00:10:40.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.271 "dma_device_type": 2 00:10:40.271 } 00:10:40.271 ], 00:10:40.271 "driver_specific": {} 00:10:40.271 } 00:10:40.271 ] 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.271 "name": "Existed_Raid", 00:10:40.271 "uuid": "ba8f2ca7-faf0-4749-a7b3-d080d58ddfc4", 00:10:40.271 "strip_size_kb": 64, 00:10:40.271 "state": "online", 00:10:40.271 "raid_level": "raid0", 00:10:40.271 "superblock": true, 00:10:40.271 "num_base_bdevs": 2, 00:10:40.271 "num_base_bdevs_discovered": 2, 00:10:40.271 "num_base_bdevs_operational": 2, 00:10:40.271 "base_bdevs_list": [ 00:10:40.271 { 00:10:40.271 "name": "BaseBdev1", 00:10:40.271 "uuid": "12158a32-6b2d-44ea-bee9-287d2b265010", 00:10:40.271 "is_configured": true, 00:10:40.271 "data_offset": 2048, 00:10:40.271 "data_size": 63488 
00:10:40.271 }, 00:10:40.271 { 00:10:40.271 "name": "BaseBdev2", 00:10:40.271 "uuid": "bbd0c58a-e9bd-48ba-a849-9221d49391d8", 00:10:40.271 "is_configured": true, 00:10:40.271 "data_offset": 2048, 00:10:40.271 "data_size": 63488 00:10:40.271 } 00:10:40.271 ] 00:10:40.271 }' 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.271 19:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.837 [2024-12-05 19:30:34.067444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.837 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.837 "name": 
"Existed_Raid", 00:10:40.837 "aliases": [ 00:10:40.837 "ba8f2ca7-faf0-4749-a7b3-d080d58ddfc4" 00:10:40.837 ], 00:10:40.837 "product_name": "Raid Volume", 00:10:40.837 "block_size": 512, 00:10:40.837 "num_blocks": 126976, 00:10:40.838 "uuid": "ba8f2ca7-faf0-4749-a7b3-d080d58ddfc4", 00:10:40.838 "assigned_rate_limits": { 00:10:40.838 "rw_ios_per_sec": 0, 00:10:40.838 "rw_mbytes_per_sec": 0, 00:10:40.838 "r_mbytes_per_sec": 0, 00:10:40.838 "w_mbytes_per_sec": 0 00:10:40.838 }, 00:10:40.838 "claimed": false, 00:10:40.838 "zoned": false, 00:10:40.838 "supported_io_types": { 00:10:40.838 "read": true, 00:10:40.838 "write": true, 00:10:40.838 "unmap": true, 00:10:40.838 "flush": true, 00:10:40.838 "reset": true, 00:10:40.838 "nvme_admin": false, 00:10:40.838 "nvme_io": false, 00:10:40.838 "nvme_io_md": false, 00:10:40.838 "write_zeroes": true, 00:10:40.838 "zcopy": false, 00:10:40.838 "get_zone_info": false, 00:10:40.838 "zone_management": false, 00:10:40.838 "zone_append": false, 00:10:40.838 "compare": false, 00:10:40.838 "compare_and_write": false, 00:10:40.838 "abort": false, 00:10:40.838 "seek_hole": false, 00:10:40.838 "seek_data": false, 00:10:40.838 "copy": false, 00:10:40.838 "nvme_iov_md": false 00:10:40.838 }, 00:10:40.838 "memory_domains": [ 00:10:40.838 { 00:10:40.838 "dma_device_id": "system", 00:10:40.838 "dma_device_type": 1 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.838 "dma_device_type": 2 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "dma_device_id": "system", 00:10:40.838 "dma_device_type": 1 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.838 "dma_device_type": 2 00:10:40.838 } 00:10:40.838 ], 00:10:40.838 "driver_specific": { 00:10:40.838 "raid": { 00:10:40.838 "uuid": "ba8f2ca7-faf0-4749-a7b3-d080d58ddfc4", 00:10:40.838 "strip_size_kb": 64, 00:10:40.838 "state": "online", 00:10:40.838 "raid_level": "raid0", 00:10:40.838 "superblock": true, 00:10:40.838 
"num_base_bdevs": 2, 00:10:40.838 "num_base_bdevs_discovered": 2, 00:10:40.838 "num_base_bdevs_operational": 2, 00:10:40.838 "base_bdevs_list": [ 00:10:40.838 { 00:10:40.838 "name": "BaseBdev1", 00:10:40.838 "uuid": "12158a32-6b2d-44ea-bee9-287d2b265010", 00:10:40.838 "is_configured": true, 00:10:40.838 "data_offset": 2048, 00:10:40.838 "data_size": 63488 00:10:40.838 }, 00:10:40.838 { 00:10:40.838 "name": "BaseBdev2", 00:10:40.838 "uuid": "bbd0c58a-e9bd-48ba-a849-9221d49391d8", 00:10:40.838 "is_configured": true, 00:10:40.838 "data_offset": 2048, 00:10:40.838 "data_size": 63488 00:10:40.838 } 00:10:40.838 ] 00:10:40.838 } 00:10:40.838 } 00:10:40.838 }' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.838 BaseBdev2' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.838 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.097 [2024-12-05 19:30:34.323142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.097 [2024-12-05 19:30:34.323187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.097 [2024-12-05 19:30:34.323271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.097 19:30:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.097 "name": "Existed_Raid", 00:10:41.097 "uuid": "ba8f2ca7-faf0-4749-a7b3-d080d58ddfc4", 00:10:41.097 "strip_size_kb": 64, 00:10:41.097 "state": "offline", 00:10:41.097 "raid_level": "raid0", 00:10:41.097 "superblock": true, 00:10:41.097 "num_base_bdevs": 2, 00:10:41.097 "num_base_bdevs_discovered": 1, 00:10:41.097 "num_base_bdevs_operational": 1, 00:10:41.097 "base_bdevs_list": [ 00:10:41.097 { 00:10:41.097 "name": null, 00:10:41.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.097 "is_configured": false, 00:10:41.097 "data_offset": 0, 00:10:41.097 "data_size": 63488 00:10:41.097 }, 00:10:41.097 { 00:10:41.097 "name": "BaseBdev2", 00:10:41.097 "uuid": "bbd0c58a-e9bd-48ba-a849-9221d49391d8", 00:10:41.097 "is_configured": true, 00:10:41.097 "data_offset": 2048, 00:10:41.097 "data_size": 63488 00:10:41.097 } 00:10:41.097 ] 00:10:41.097 }' 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.097 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.666 19:30:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.666 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.667 19:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.667 [2024-12-05 19:30:34.999597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.667 [2024-12-05 19:30:34.999666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.667 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:41.667 19:30:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60879 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60879 ']' 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60879 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60879 00:10:41.924 killing process with pid 60879 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60879' 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60879 00:10:41.924 [2024-12-05 19:30:35.178556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.924 19:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60879 00:10:41.924 [2024-12-05 19:30:35.194296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.859 19:30:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.859 00:10:42.859 real 0m5.579s 00:10:42.859 user 0m8.463s 00:10:42.859 sys 0m0.763s 00:10:42.859 ************************************ 00:10:42.859 END TEST raid_state_function_test_sb 00:10:42.859 ************************************ 00:10:42.859 19:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.859 19:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.859 19:30:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:42.859 19:30:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.859 19:30:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.859 19:30:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.119 ************************************ 00:10:43.119 START TEST raid_superblock_test 00:10:43.119 ************************************ 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:43.119 19:30:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61137 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61137 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61137 ']' 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.119 19:30:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.119 [2024-12-05 19:30:36.419813] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:43.119 [2024-12-05 19:30:36.420330] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:10:43.378 [2024-12-05 19:30:36.598940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.378 [2024-12-05 19:30:36.720599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.685 [2024-12-05 19:30:36.919636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.685 [2024-12-05 19:30:36.919694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.956 19:30:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.956 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.215 malloc1 00:10:44.215 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.215 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.215 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.216 [2024-12-05 19:30:37.434327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.216 [2024-12-05 19:30:37.434429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.216 [2024-12-05 19:30:37.434469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:44.216 [2024-12-05 19:30:37.434486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.216 [2024-12-05 19:30:37.437577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.216 [2024-12-05 19:30:37.437626] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.216 pt1 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.216 19:30:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.216 malloc2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.216 [2024-12-05 19:30:37.486045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.216 [2024-12-05 19:30:37.486162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.216 [2024-12-05 19:30:37.486202] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:44.216 
[2024-12-05 19:30:37.486217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.216 [2024-12-05 19:30:37.489222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.216 [2024-12-05 19:30:37.489281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.216 pt2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.216 [2024-12-05 19:30:37.494179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.216 [2024-12-05 19:30:37.496641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.216 [2024-12-05 19:30:37.497088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:44.216 [2024-12-05 19:30:37.497114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:44.216 [2024-12-05 19:30:37.497454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:44.216 [2024-12-05 19:30:37.497646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:44.216 [2024-12-05 19:30:37.497666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:44.216 [2024-12-05 19:30:37.497919] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.216 "name": "raid_bdev1", 00:10:44.216 "uuid": 
"aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:44.216 "strip_size_kb": 64, 00:10:44.216 "state": "online", 00:10:44.216 "raid_level": "raid0", 00:10:44.216 "superblock": true, 00:10:44.216 "num_base_bdevs": 2, 00:10:44.216 "num_base_bdevs_discovered": 2, 00:10:44.216 "num_base_bdevs_operational": 2, 00:10:44.216 "base_bdevs_list": [ 00:10:44.216 { 00:10:44.216 "name": "pt1", 00:10:44.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.216 "is_configured": true, 00:10:44.216 "data_offset": 2048, 00:10:44.216 "data_size": 63488 00:10:44.216 }, 00:10:44.216 { 00:10:44.216 "name": "pt2", 00:10:44.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.216 "is_configured": true, 00:10:44.216 "data_offset": 2048, 00:10:44.216 "data_size": 63488 00:10:44.216 } 00:10:44.216 ] 00:10:44.216 }' 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.216 19:30:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 
19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.784 [2024-12-05 19:30:38.022819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.784 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.784 "name": "raid_bdev1", 00:10:44.784 "aliases": [ 00:10:44.784 "aebb0583-1287-4c0b-a717-5971d007fbd5" 00:10:44.784 ], 00:10:44.784 "product_name": "Raid Volume", 00:10:44.784 "block_size": 512, 00:10:44.784 "num_blocks": 126976, 00:10:44.784 "uuid": "aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:44.784 "assigned_rate_limits": { 00:10:44.784 "rw_ios_per_sec": 0, 00:10:44.784 "rw_mbytes_per_sec": 0, 00:10:44.784 "r_mbytes_per_sec": 0, 00:10:44.784 "w_mbytes_per_sec": 0 00:10:44.784 }, 00:10:44.784 "claimed": false, 00:10:44.784 "zoned": false, 00:10:44.784 "supported_io_types": { 00:10:44.784 "read": true, 00:10:44.784 "write": true, 00:10:44.784 "unmap": true, 00:10:44.784 "flush": true, 00:10:44.784 "reset": true, 00:10:44.784 "nvme_admin": false, 00:10:44.784 "nvme_io": false, 00:10:44.784 "nvme_io_md": false, 00:10:44.784 "write_zeroes": true, 00:10:44.784 "zcopy": false, 00:10:44.784 "get_zone_info": false, 00:10:44.784 "zone_management": false, 00:10:44.784 "zone_append": false, 00:10:44.784 "compare": false, 00:10:44.784 "compare_and_write": false, 00:10:44.784 "abort": false, 00:10:44.784 "seek_hole": false, 00:10:44.784 "seek_data": false, 00:10:44.784 "copy": false, 00:10:44.784 "nvme_iov_md": false 00:10:44.784 }, 00:10:44.784 "memory_domains": [ 00:10:44.784 { 00:10:44.784 "dma_device_id": "system", 00:10:44.784 "dma_device_type": 1 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.784 "dma_device_type": 2 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "dma_device_id": "system", 00:10:44.784 
"dma_device_type": 1 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.784 "dma_device_type": 2 00:10:44.784 } 00:10:44.784 ], 00:10:44.784 "driver_specific": { 00:10:44.784 "raid": { 00:10:44.784 "uuid": "aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:44.784 "strip_size_kb": 64, 00:10:44.784 "state": "online", 00:10:44.784 "raid_level": "raid0", 00:10:44.784 "superblock": true, 00:10:44.784 "num_base_bdevs": 2, 00:10:44.784 "num_base_bdevs_discovered": 2, 00:10:44.784 "num_base_bdevs_operational": 2, 00:10:44.784 "base_bdevs_list": [ 00:10:44.784 { 00:10:44.784 "name": "pt1", 00:10:44.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.784 "is_configured": true, 00:10:44.784 "data_offset": 2048, 00:10:44.784 "data_size": 63488 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "name": "pt2", 00:10:44.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.784 "is_configured": true, 00:10:44.784 "data_offset": 2048, 00:10:44.784 "data_size": 63488 00:10:44.784 } 00:10:44.785 ] 00:10:44.785 } 00:10:44.785 } 00:10:44.785 }' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.785 pt2' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.785 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:45.044 [2024-12-05 19:30:38.250777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aebb0583-1287-4c0b-a717-5971d007fbd5 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aebb0583-1287-4c0b-a717-5971d007fbd5 ']' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 [2024-12-05 19:30:38.302420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.044 [2024-12-05 19:30:38.302625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.044 [2024-12-05 19:30:38.302845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.044 [2024-12-05 19:30:38.303012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.044 [2024-12-05 19:30:38.303198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 
19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 [2024-12-05 19:30:38.442480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:45.044 [2024-12-05 19:30:38.445241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:45.044 [2024-12-05 19:30:38.445335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:45.044 [2024-12-05 19:30:38.445421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:45.044 [2024-12-05 19:30:38.445446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.044 [2024-12-05 19:30:38.445463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:45.044 request: 00:10:45.044 { 00:10:45.044 "name": "raid_bdev1", 00:10:45.044 "raid_level": "raid0", 00:10:45.044 "base_bdevs": [ 00:10:45.044 "malloc1", 00:10:45.044 "malloc2" 00:10:45.044 ], 00:10:45.044 "strip_size_kb": 64, 00:10:45.044 "superblock": false, 00:10:45.044 "method": "bdev_raid_create", 00:10:45.044 "req_id": 1 00:10:45.044 } 00:10:45.044 Got JSON-RPC error response 00:10:45.044 response: 00:10:45.044 { 00:10:45.044 "code": -17, 00:10:45.044 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:45.044 } 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:45.044 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.303 [2024-12-05 19:30:38.510490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:45.303 [2024-12-05 19:30:38.510756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.303 [2024-12-05 19:30:38.510831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:45.303 [2024-12-05 19:30:38.511011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.303 [2024-12-05 19:30:38.514193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.303 [2024-12-05 19:30:38.514388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:45.303 [2024-12-05 19:30:38.514597] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:45.303 [2024-12-05 19:30:38.514799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.303 pt1 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:45.303 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.304 "name": "raid_bdev1", 00:10:45.304 "uuid": "aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:45.304 "strip_size_kb": 64, 00:10:45.304 "state": "configuring", 00:10:45.304 "raid_level": "raid0", 00:10:45.304 "superblock": true, 00:10:45.304 "num_base_bdevs": 2, 00:10:45.304 "num_base_bdevs_discovered": 1, 00:10:45.304 "num_base_bdevs_operational": 2, 00:10:45.304 "base_bdevs_list": [ 00:10:45.304 { 00:10:45.304 "name": "pt1", 00:10:45.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.304 "is_configured": true, 00:10:45.304 "data_offset": 2048, 00:10:45.304 "data_size": 63488 00:10:45.304 }, 00:10:45.304 { 00:10:45.304 "name": null, 00:10:45.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.304 "is_configured": false, 00:10:45.304 "data_offset": 2048, 00:10:45.304 "data_size": 63488 00:10:45.304 } 00:10:45.304 ] 00:10:45.304 }' 00:10:45.304 19:30:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.304 19:30:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.870 [2024-12-05 19:30:39.030884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.870 [2024-12-05 19:30:39.030980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.870 [2024-12-05 19:30:39.031015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:45.870 [2024-12-05 19:30:39.031034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.870 [2024-12-05 19:30:39.031617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.870 [2024-12-05 19:30:39.031649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.870 [2024-12-05 19:30:39.031771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.870 [2024-12-05 19:30:39.031813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.870 [2024-12-05 19:30:39.031963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.870 [2024-12-05 19:30:39.031985] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:45.870 [2024-12-05 19:30:39.032292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:45.870 [2024-12-05 19:30:39.032479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:45.870 [2024-12-05 19:30:39.032495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:45.870 [2024-12-05 19:30:39.032671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.870 pt2 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.870 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.871 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.871 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.871 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.871 "name": "raid_bdev1", 00:10:45.871 "uuid": "aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:45.871 "strip_size_kb": 64, 00:10:45.871 "state": "online", 00:10:45.871 "raid_level": "raid0", 00:10:45.871 "superblock": true, 00:10:45.871 "num_base_bdevs": 2, 00:10:45.871 "num_base_bdevs_discovered": 2, 00:10:45.871 "num_base_bdevs_operational": 2, 00:10:45.871 "base_bdevs_list": [ 00:10:45.871 { 00:10:45.871 "name": "pt1", 00:10:45.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.871 "is_configured": true, 00:10:45.871 "data_offset": 2048, 00:10:45.871 "data_size": 63488 00:10:45.871 }, 00:10:45.871 { 00:10:45.871 "name": "pt2", 00:10:45.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.871 "is_configured": true, 00:10:45.871 "data_offset": 2048, 00:10:45.871 "data_size": 63488 00:10:45.871 } 00:10:45.871 ] 00:10:45.871 }' 00:10:45.871 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.871 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:46.129 
19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.129 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.129 [2024-12-05 19:30:39.559430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.387 "name": "raid_bdev1", 00:10:46.387 "aliases": [ 00:10:46.387 "aebb0583-1287-4c0b-a717-5971d007fbd5" 00:10:46.387 ], 00:10:46.387 "product_name": "Raid Volume", 00:10:46.387 "block_size": 512, 00:10:46.387 "num_blocks": 126976, 00:10:46.387 "uuid": "aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:46.387 "assigned_rate_limits": { 00:10:46.387 "rw_ios_per_sec": 0, 00:10:46.387 "rw_mbytes_per_sec": 0, 00:10:46.387 "r_mbytes_per_sec": 0, 00:10:46.387 "w_mbytes_per_sec": 0 00:10:46.387 }, 00:10:46.387 "claimed": false, 00:10:46.387 "zoned": false, 00:10:46.387 "supported_io_types": { 00:10:46.387 "read": true, 00:10:46.387 "write": true, 00:10:46.387 "unmap": true, 00:10:46.387 "flush": true, 00:10:46.387 "reset": true, 00:10:46.387 "nvme_admin": false, 00:10:46.387 "nvme_io": false, 00:10:46.387 "nvme_io_md": false, 00:10:46.387 
"write_zeroes": true, 00:10:46.387 "zcopy": false, 00:10:46.387 "get_zone_info": false, 00:10:46.387 "zone_management": false, 00:10:46.387 "zone_append": false, 00:10:46.387 "compare": false, 00:10:46.387 "compare_and_write": false, 00:10:46.387 "abort": false, 00:10:46.387 "seek_hole": false, 00:10:46.387 "seek_data": false, 00:10:46.387 "copy": false, 00:10:46.387 "nvme_iov_md": false 00:10:46.387 }, 00:10:46.387 "memory_domains": [ 00:10:46.387 { 00:10:46.387 "dma_device_id": "system", 00:10:46.387 "dma_device_type": 1 00:10:46.387 }, 00:10:46.387 { 00:10:46.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.387 "dma_device_type": 2 00:10:46.387 }, 00:10:46.387 { 00:10:46.387 "dma_device_id": "system", 00:10:46.387 "dma_device_type": 1 00:10:46.387 }, 00:10:46.387 { 00:10:46.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.387 "dma_device_type": 2 00:10:46.387 } 00:10:46.387 ], 00:10:46.387 "driver_specific": { 00:10:46.387 "raid": { 00:10:46.387 "uuid": "aebb0583-1287-4c0b-a717-5971d007fbd5", 00:10:46.387 "strip_size_kb": 64, 00:10:46.387 "state": "online", 00:10:46.387 "raid_level": "raid0", 00:10:46.387 "superblock": true, 00:10:46.387 "num_base_bdevs": 2, 00:10:46.387 "num_base_bdevs_discovered": 2, 00:10:46.387 "num_base_bdevs_operational": 2, 00:10:46.387 "base_bdevs_list": [ 00:10:46.387 { 00:10:46.387 "name": "pt1", 00:10:46.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.387 "is_configured": true, 00:10:46.387 "data_offset": 2048, 00:10:46.387 "data_size": 63488 00:10:46.387 }, 00:10:46.387 { 00:10:46.387 "name": "pt2", 00:10:46.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.387 "is_configured": true, 00:10:46.387 "data_offset": 2048, 00:10:46.387 "data_size": 63488 00:10:46.387 } 00:10:46.387 ] 00:10:46.387 } 00:10:46.387 } 00:10:46.387 }' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:46.387 pt2' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.387 19:30:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.387 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.646 [2024-12-05 19:30:39.827513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aebb0583-1287-4c0b-a717-5971d007fbd5 '!=' aebb0583-1287-4c0b-a717-5971d007fbd5 ']' 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61137 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61137 ']' 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61137 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61137 00:10:46.646 killing process with pid 61137 
00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61137' 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61137 00:10:46.646 [2024-12-05 19:30:39.905919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.646 19:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61137 00:10:46.646 [2024-12-05 19:30:39.906033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.646 [2024-12-05 19:30:39.906128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.646 [2024-12-05 19:30:39.906146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:46.905 [2024-12-05 19:30:40.094965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.837 19:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:47.837 00:10:47.837 real 0m4.838s 00:10:47.837 user 0m7.124s 00:10:47.837 sys 0m0.686s 00:10:47.837 ************************************ 00:10:47.837 END TEST raid_superblock_test 00:10:47.837 ************************************ 00:10:47.837 19:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.837 19:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.837 19:30:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:47.837 19:30:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.837 19:30:41 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.837 19:30:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.837 ************************************ 00:10:47.837 START TEST raid_read_error_test 00:10:47.837 ************************************ 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.837 19:30:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LZ1Vri7EQ3 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61347 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61347 00:10:47.837 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61347 ']' 00:10:47.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.838 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.838 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.838 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:47.838 19:30:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:47.838 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.838 19:30:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.096 [2024-12-05 19:30:41.323439] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:10:48.096 [2024-12-05 19:30:41.323642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:10:48.096 [2024-12-05 19:30:41.515641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.354 [2024-12-05 19:30:41.668428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.613 [2024-12-05 19:30:41.869066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.613 [2024-12-05 19:30:41.869142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.871 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.871 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.871 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.871 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.871 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.871 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.130 BaseBdev1_malloc 00:10:49.130 19:30:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.130 true 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.130 [2024-12-05 19:30:42.333062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:49.130 [2024-12-05 19:30:42.333149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.130 [2024-12-05 19:30:42.333181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:49.130 [2024-12-05 19:30:42.333199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.130 [2024-12-05 19:30:42.336158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.130 [2024-12-05 19:30:42.336419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:49.130 BaseBdev1 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.130 BaseBdev2_malloc 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.130 true 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.130 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.130 [2024-12-05 19:30:42.395156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:49.130 [2024-12-05 19:30:42.395239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.130 [2024-12-05 19:30:42.395265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:49.130 [2024-12-05 19:30:42.395292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.131 [2024-12-05 19:30:42.398434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.131 [2024-12-05 19:30:42.398498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:49.131 BaseBdev2 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.131 [2024-12-05 19:30:42.403353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.131 [2024-12-05 19:30:42.405869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.131 [2024-12-05 19:30:42.406131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.131 [2024-12-05 19:30:42.406156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:49.131 [2024-12-05 19:30:42.406427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:49.131 [2024-12-05 19:30:42.406627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.131 [2024-12-05 19:30:42.406649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:49.131 [2024-12-05 19:30:42.406887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.131 "name": "raid_bdev1", 00:10:49.131 "uuid": "187e91f4-bb8d-4e8d-856f-942d64974d3e", 00:10:49.131 "strip_size_kb": 64, 00:10:49.131 "state": "online", 00:10:49.131 "raid_level": "raid0", 00:10:49.131 "superblock": true, 00:10:49.131 "num_base_bdevs": 2, 00:10:49.131 "num_base_bdevs_discovered": 2, 00:10:49.131 "num_base_bdevs_operational": 2, 00:10:49.131 "base_bdevs_list": [ 00:10:49.131 { 00:10:49.131 "name": "BaseBdev1", 00:10:49.131 "uuid": "8a3904e1-3135-5e22-9eb2-aace65056245", 00:10:49.131 "is_configured": true, 00:10:49.131 "data_offset": 2048, 00:10:49.131 "data_size": 63488 00:10:49.131 }, 00:10:49.131 { 00:10:49.131 "name": "BaseBdev2", 00:10:49.131 "uuid": 
"3cfd32ff-e36c-577b-bece-d97469bbc445", 00:10:49.131 "is_configured": true, 00:10:49.131 "data_offset": 2048, 00:10:49.131 "data_size": 63488 00:10:49.131 } 00:10:49.131 ] 00:10:49.131 }' 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.131 19:30:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.699 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.699 19:30:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.699 [2024-12-05 19:30:43.029091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.636 "name": "raid_bdev1", 00:10:50.636 "uuid": "187e91f4-bb8d-4e8d-856f-942d64974d3e", 00:10:50.636 "strip_size_kb": 64, 00:10:50.636 "state": "online", 00:10:50.636 "raid_level": "raid0", 00:10:50.636 "superblock": true, 00:10:50.636 "num_base_bdevs": 2, 00:10:50.636 "num_base_bdevs_discovered": 2, 00:10:50.636 "num_base_bdevs_operational": 2, 00:10:50.636 "base_bdevs_list": [ 00:10:50.636 { 00:10:50.636 "name": "BaseBdev1", 00:10:50.636 "uuid": "8a3904e1-3135-5e22-9eb2-aace65056245", 00:10:50.636 "is_configured": true, 00:10:50.636 "data_offset": 2048, 00:10:50.636 "data_size": 63488 00:10:50.636 }, 00:10:50.636 { 00:10:50.636 "name": "BaseBdev2", 00:10:50.636 "uuid": 
"3cfd32ff-e36c-577b-bece-d97469bbc445", 00:10:50.636 "is_configured": true, 00:10:50.636 "data_offset": 2048, 00:10:50.636 "data_size": 63488 00:10:50.636 } 00:10:50.636 ] 00:10:50.636 }' 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.636 19:30:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.203 [2024-12-05 19:30:44.484940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.203 [2024-12-05 19:30:44.484994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.203 [2024-12-05 19:30:44.488590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.203 { 00:10:51.203 "results": [ 00:10:51.203 { 00:10:51.203 "job": "raid_bdev1", 00:10:51.203 "core_mask": "0x1", 00:10:51.203 "workload": "randrw", 00:10:51.203 "percentage": 50, 00:10:51.203 "status": "finished", 00:10:51.203 "queue_depth": 1, 00:10:51.203 "io_size": 131072, 00:10:51.203 "runtime": 1.453463, 00:10:51.203 "iops": 10097.264257844885, 00:10:51.203 "mibps": 1262.1580322306106, 00:10:51.203 "io_failed": 1, 00:10:51.203 "io_timeout": 0, 00:10:51.203 "avg_latency_us": 138.42187516646328, 00:10:51.203 "min_latency_us": 39.09818181818182, 00:10:51.203 "max_latency_us": 1683.0836363636363 00:10:51.203 } 00:10:51.203 ], 00:10:51.203 "core_count": 1 00:10:51.203 } 00:10:51.203 [2024-12-05 19:30:44.488810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.203 [2024-12-05 19:30:44.488872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:10:51.203 [2024-12-05 19:30:44.488894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61347 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61347 ']' 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61347 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61347 00:10:51.203 killing process with pid 61347 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61347' 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61347 00:10:51.203 [2024-12-05 19:30:44.526052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.203 19:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61347 00:10:51.463 [2024-12-05 19:30:44.649739] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LZ1Vri7EQ3 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:52.400 19:30:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:52.400 ************************************ 00:10:52.400 END TEST raid_read_error_test 00:10:52.400 ************************************ 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:10:52.400 00:10:52.400 real 0m4.595s 00:10:52.400 user 0m5.713s 00:10:52.400 sys 0m0.571s 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.400 19:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.400 19:30:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:52.400 19:30:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.400 19:30:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.400 19:30:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.659 ************************************ 00:10:52.659 START TEST raid_write_error_test 00:10:52.659 ************************************ 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:52.659 
19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:52.659 19:30:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fVeULc8MWS 00:10:52.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61494 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61494 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61494 ']' 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.659 19:30:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.659 [2024-12-05 19:30:45.969311] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:52.659 [2024-12-05 19:30:45.969523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:10:52.919 [2024-12-05 19:30:46.157786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.919 [2024-12-05 19:30:46.298713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.179 [2024-12-05 19:30:46.506536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.179 [2024-12-05 19:30:46.506609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.766 19:30:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.766 19:30:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.766 19:30:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.766 19:30:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.766 19:30:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.766 19:30:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.766 BaseBdev1_malloc 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.766 true 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.766 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.766 [2024-12-05 19:30:47.026596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.766 [2024-12-05 19:30:47.026697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.766 [2024-12-05 19:30:47.026778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.766 [2024-12-05 19:30:47.026799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.766 [2024-12-05 19:30:47.029607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.766 [2024-12-05 19:30:47.029693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.766 BaseBdev1 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 BaseBdev2_malloc 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.767 19:30:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 true 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 [2024-12-05 19:30:47.083885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.767 [2024-12-05 19:30:47.084171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.767 [2024-12-05 19:30:47.084212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:53.767 [2024-12-05 19:30:47.084230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.767 [2024-12-05 19:30:47.087277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.767 [2024-12-05 19:30:47.087493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.767 BaseBdev2 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.767 [2024-12-05 19:30:47.091997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:53.767 [2024-12-05 19:30:47.094766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.767 [2024-12-05 19:30:47.095116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:53.767 [2024-12-05 19:30:47.095144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:53.767 [2024-12-05 19:30:47.095527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:53.767 [2024-12-05 19:30:47.095811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:53.767 [2024-12-05 19:30:47.095835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:53.767 [2024-12-05 19:30:47.096112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.767 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.768 "name": "raid_bdev1", 00:10:53.768 "uuid": "97d83170-7320-43f4-903e-51d0611143ce", 00:10:53.768 "strip_size_kb": 64, 00:10:53.768 "state": "online", 00:10:53.768 "raid_level": "raid0", 00:10:53.768 "superblock": true, 00:10:53.768 "num_base_bdevs": 2, 00:10:53.768 "num_base_bdevs_discovered": 2, 00:10:53.768 "num_base_bdevs_operational": 2, 00:10:53.768 "base_bdevs_list": [ 00:10:53.768 { 00:10:53.768 "name": "BaseBdev1", 00:10:53.768 "uuid": "584f6d51-fadd-57d4-b910-3a1b33a6def5", 00:10:53.768 "is_configured": true, 00:10:53.768 "data_offset": 2048, 00:10:53.768 "data_size": 63488 00:10:53.768 }, 00:10:53.768 { 00:10:53.768 "name": "BaseBdev2", 00:10:53.768 "uuid": "5def2138-03be-5e8f-9e53-ef7849380125", 00:10:53.768 "is_configured": true, 00:10:53.768 "data_offset": 2048, 00:10:53.768 "data_size": 63488 00:10:53.768 } 00:10:53.768 ] 00:10:53.768 }' 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.768 19:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.336 19:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:54.336 19:30:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:54.595 [2024-12-05 19:30:47.777919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.531 19:30:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.531 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.532 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.532 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.532 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.532 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.532 "name": "raid_bdev1", 00:10:55.532 "uuid": "97d83170-7320-43f4-903e-51d0611143ce", 00:10:55.532 "strip_size_kb": 64, 00:10:55.532 "state": "online", 00:10:55.532 "raid_level": "raid0", 00:10:55.532 "superblock": true, 00:10:55.532 "num_base_bdevs": 2, 00:10:55.532 "num_base_bdevs_discovered": 2, 00:10:55.532 "num_base_bdevs_operational": 2, 00:10:55.532 "base_bdevs_list": [ 00:10:55.532 { 00:10:55.532 "name": "BaseBdev1", 00:10:55.532 "uuid": "584f6d51-fadd-57d4-b910-3a1b33a6def5", 00:10:55.532 "is_configured": true, 00:10:55.532 "data_offset": 2048, 00:10:55.532 "data_size": 63488 00:10:55.532 }, 00:10:55.532 { 00:10:55.532 "name": "BaseBdev2", 00:10:55.532 "uuid": "5def2138-03be-5e8f-9e53-ef7849380125", 00:10:55.532 "is_configured": true, 00:10:55.532 "data_offset": 2048, 00:10:55.532 "data_size": 63488 00:10:55.532 } 00:10:55.532 ] 00:10:55.532 }' 00:10:55.532 19:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.532 19:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.792 [2024-12-05 19:30:49.168439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.792 [2024-12-05 19:30:49.168490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.792 [2024-12-05 19:30:49.173095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.792 [2024-12-05 19:30:49.173384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.792 [2024-12-05 19:30:49.173630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.792 [2024-12-05 19:30:49.173830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:55.792 { 00:10:55.792 "results": [ 00:10:55.792 { 00:10:55.792 "job": "raid_bdev1", 00:10:55.792 "core_mask": "0x1", 00:10:55.792 "workload": "randrw", 00:10:55.792 "percentage": 50, 00:10:55.792 "status": "finished", 00:10:55.792 "queue_depth": 1, 00:10:55.792 "io_size": 131072, 00:10:55.792 "runtime": 1.387955, 00:10:55.792 "iops": 9836.774247003685, 00:10:55.792 "mibps": 1229.5967808754606, 00:10:55.792 "io_failed": 1, 00:10:55.792 "io_timeout": 0, 00:10:55.792 "avg_latency_us": 141.62977482456023, 00:10:55.792 "min_latency_us": 38.63272727272727, 00:10:55.792 "max_latency_us": 1824.581818181818 00:10:55.792 } 00:10:55.792 ], 00:10:55.792 "core_count": 1 00:10:55.792 } 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61494 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61494 ']' 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61494 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61494 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61494' 00:10:55.792 killing process with pid 61494 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61494 00:10:55.792 [2024-12-05 19:30:49.214571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.792 19:30:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61494 00:10:56.051 [2024-12-05 19:30:49.365894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fVeULc8MWS 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.429 ************************************ 00:10:57.429 END TEST raid_write_error_test 00:10:57.429 ************************************ 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:57.429 
19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:57.429 00:10:57.429 real 0m4.647s 00:10:57.429 user 0m5.865s 00:10:57.429 sys 0m0.544s 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.429 19:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.429 19:30:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:57.429 19:30:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:57.429 19:30:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.429 19:30:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.429 19:30:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.429 ************************************ 00:10:57.429 START TEST raid_state_function_test 00:10:57.429 ************************************ 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61637 00:10:57.429 Process raid pid: 61637 
00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61637' 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61637 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61637 ']' 00:10:57.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.429 19:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.429 [2024-12-05 19:30:50.658225] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:10:57.429 [2024-12-05 19:30:50.658403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.429 [2024-12-05 19:30:50.851857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.688 [2024-12-05 19:30:51.009999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.946 [2024-12-05 19:30:51.249811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.946 [2024-12-05 19:30:51.250102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.559 [2024-12-05 19:30:51.698386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.559 [2024-12-05 19:30:51.698457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.559 [2024-12-05 19:30:51.698475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.559 [2024-12-05 19:30:51.698492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.559 19:30:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.559 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.559 "name": "Existed_Raid", 00:10:58.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.559 "strip_size_kb": 64, 00:10:58.559 "state": "configuring", 00:10:58.559 
"raid_level": "concat", 00:10:58.559 "superblock": false, 00:10:58.559 "num_base_bdevs": 2, 00:10:58.559 "num_base_bdevs_discovered": 0, 00:10:58.559 "num_base_bdevs_operational": 2, 00:10:58.559 "base_bdevs_list": [ 00:10:58.559 { 00:10:58.559 "name": "BaseBdev1", 00:10:58.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.559 "is_configured": false, 00:10:58.559 "data_offset": 0, 00:10:58.559 "data_size": 0 00:10:58.559 }, 00:10:58.559 { 00:10:58.559 "name": "BaseBdev2", 00:10:58.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.560 "is_configured": false, 00:10:58.560 "data_offset": 0, 00:10:58.560 "data_size": 0 00:10:58.560 } 00:10:58.560 ] 00:10:58.560 }' 00:10:58.560 19:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.560 19:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.818 [2024-12-05 19:30:52.218468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.818 [2024-12-05 19:30:52.218512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:58.818 [2024-12-05 19:30:52.226443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.818 [2024-12-05 19:30:52.226497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.818 [2024-12-05 19:30:52.226513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.818 [2024-12-05 19:30:52.226532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.818 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.078 [2024-12-05 19:30:52.271876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.078 BaseBdev1 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.078 [ 00:10:59.078 { 00:10:59.078 "name": "BaseBdev1", 00:10:59.078 "aliases": [ 00:10:59.078 "5baa7242-cf47-40f9-92cf-a21550daa635" 00:10:59.078 ], 00:10:59.078 "product_name": "Malloc disk", 00:10:59.078 "block_size": 512, 00:10:59.078 "num_blocks": 65536, 00:10:59.078 "uuid": "5baa7242-cf47-40f9-92cf-a21550daa635", 00:10:59.078 "assigned_rate_limits": { 00:10:59.078 "rw_ios_per_sec": 0, 00:10:59.078 "rw_mbytes_per_sec": 0, 00:10:59.078 "r_mbytes_per_sec": 0, 00:10:59.078 "w_mbytes_per_sec": 0 00:10:59.078 }, 00:10:59.078 "claimed": true, 00:10:59.078 "claim_type": "exclusive_write", 00:10:59.078 "zoned": false, 00:10:59.078 "supported_io_types": { 00:10:59.078 "read": true, 00:10:59.078 "write": true, 00:10:59.078 "unmap": true, 00:10:59.078 "flush": true, 00:10:59.078 "reset": true, 00:10:59.078 "nvme_admin": false, 00:10:59.078 "nvme_io": false, 00:10:59.078 "nvme_io_md": false, 00:10:59.078 "write_zeroes": true, 00:10:59.078 "zcopy": true, 00:10:59.078 "get_zone_info": false, 00:10:59.078 "zone_management": false, 00:10:59.078 "zone_append": false, 00:10:59.078 "compare": false, 00:10:59.078 "compare_and_write": false, 00:10:59.078 "abort": true, 00:10:59.078 "seek_hole": false, 00:10:59.078 "seek_data": false, 00:10:59.078 "copy": true, 00:10:59.078 "nvme_iov_md": 
false 00:10:59.078 }, 00:10:59.078 "memory_domains": [ 00:10:59.078 { 00:10:59.078 "dma_device_id": "system", 00:10:59.078 "dma_device_type": 1 00:10:59.078 }, 00:10:59.078 { 00:10:59.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.078 "dma_device_type": 2 00:10:59.078 } 00:10:59.078 ], 00:10:59.078 "driver_specific": {} 00:10:59.078 } 00:10:59.078 ] 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.078 
19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.078 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.078 "name": "Existed_Raid", 00:10:59.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.078 "strip_size_kb": 64, 00:10:59.078 "state": "configuring", 00:10:59.078 "raid_level": "concat", 00:10:59.078 "superblock": false, 00:10:59.078 "num_base_bdevs": 2, 00:10:59.078 "num_base_bdevs_discovered": 1, 00:10:59.078 "num_base_bdevs_operational": 2, 00:10:59.078 "base_bdevs_list": [ 00:10:59.078 { 00:10:59.078 "name": "BaseBdev1", 00:10:59.078 "uuid": "5baa7242-cf47-40f9-92cf-a21550daa635", 00:10:59.078 "is_configured": true, 00:10:59.079 "data_offset": 0, 00:10:59.079 "data_size": 65536 00:10:59.079 }, 00:10:59.079 { 00:10:59.079 "name": "BaseBdev2", 00:10:59.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.079 "is_configured": false, 00:10:59.079 "data_offset": 0, 00:10:59.079 "data_size": 0 00:10:59.079 } 00:10:59.079 ] 00:10:59.079 }' 00:10:59.079 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.079 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.647 [2024-12-05 19:30:52.820091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.647 [2024-12-05 19:30:52.820284] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.647 [2024-12-05 19:30:52.828116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.647 [2024-12-05 19:30:52.830520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.647 [2024-12-05 19:30:52.830578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.647 "name": "Existed_Raid", 00:10:59.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.647 "strip_size_kb": 64, 00:10:59.647 "state": "configuring", 00:10:59.647 "raid_level": "concat", 00:10:59.647 "superblock": false, 00:10:59.647 "num_base_bdevs": 2, 00:10:59.647 "num_base_bdevs_discovered": 1, 00:10:59.647 "num_base_bdevs_operational": 2, 00:10:59.647 "base_bdevs_list": [ 00:10:59.647 { 00:10:59.647 "name": "BaseBdev1", 00:10:59.647 "uuid": "5baa7242-cf47-40f9-92cf-a21550daa635", 00:10:59.647 "is_configured": true, 00:10:59.647 "data_offset": 0, 00:10:59.647 "data_size": 65536 00:10:59.647 }, 00:10:59.647 { 00:10:59.647 "name": "BaseBdev2", 00:10:59.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.647 "is_configured": false, 00:10:59.647 "data_offset": 0, 00:10:59.647 "data_size": 0 00:10:59.647 } 
00:10:59.647 ] 00:10:59.647 }' 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.647 19:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.907 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.907 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.907 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 [2024-12-05 19:30:53.371201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.165 [2024-12-05 19:30:53.371265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.165 [2024-12-05 19:30:53.371279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:00.165 [2024-12-05 19:30:53.371625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:00.165 [2024-12-05 19:30:53.371875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.165 [2024-12-05 19:30:53.371897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:00.165 [2024-12-05 19:30:53.372215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.165 BaseBdev2 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.165 19:30:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 [ 00:11:00.165 { 00:11:00.165 "name": "BaseBdev2", 00:11:00.165 "aliases": [ 00:11:00.165 "b49af818-9169-4673-ad78-c739f19e933b" 00:11:00.165 ], 00:11:00.165 "product_name": "Malloc disk", 00:11:00.165 "block_size": 512, 00:11:00.165 "num_blocks": 65536, 00:11:00.165 "uuid": "b49af818-9169-4673-ad78-c739f19e933b", 00:11:00.165 "assigned_rate_limits": { 00:11:00.165 "rw_ios_per_sec": 0, 00:11:00.165 "rw_mbytes_per_sec": 0, 00:11:00.165 "r_mbytes_per_sec": 0, 00:11:00.165 "w_mbytes_per_sec": 0 00:11:00.165 }, 00:11:00.165 "claimed": true, 00:11:00.165 "claim_type": "exclusive_write", 00:11:00.165 "zoned": false, 00:11:00.165 "supported_io_types": { 00:11:00.165 "read": true, 00:11:00.165 "write": true, 00:11:00.165 "unmap": true, 00:11:00.165 "flush": true, 00:11:00.165 "reset": true, 00:11:00.165 "nvme_admin": false, 00:11:00.165 "nvme_io": false, 00:11:00.165 "nvme_io_md": 
false, 00:11:00.165 "write_zeroes": true, 00:11:00.165 "zcopy": true, 00:11:00.165 "get_zone_info": false, 00:11:00.165 "zone_management": false, 00:11:00.165 "zone_append": false, 00:11:00.165 "compare": false, 00:11:00.165 "compare_and_write": false, 00:11:00.165 "abort": true, 00:11:00.165 "seek_hole": false, 00:11:00.165 "seek_data": false, 00:11:00.165 "copy": true, 00:11:00.165 "nvme_iov_md": false 00:11:00.165 }, 00:11:00.165 "memory_domains": [ 00:11:00.165 { 00:11:00.165 "dma_device_id": "system", 00:11:00.165 "dma_device_type": 1 00:11:00.165 }, 00:11:00.165 { 00:11:00.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.165 "dma_device_type": 2 00:11:00.165 } 00:11:00.165 ], 00:11:00.165 "driver_specific": {} 00:11:00.165 } 00:11:00.165 ] 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.165 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.166 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.166 "name": "Existed_Raid", 00:11:00.166 "uuid": "3e2275d9-f28d-4199-be6c-728b23774027", 00:11:00.166 "strip_size_kb": 64, 00:11:00.166 "state": "online", 00:11:00.166 "raid_level": "concat", 00:11:00.166 "superblock": false, 00:11:00.166 "num_base_bdevs": 2, 00:11:00.166 "num_base_bdevs_discovered": 2, 00:11:00.166 "num_base_bdevs_operational": 2, 00:11:00.166 "base_bdevs_list": [ 00:11:00.166 { 00:11:00.166 "name": "BaseBdev1", 00:11:00.166 "uuid": "5baa7242-cf47-40f9-92cf-a21550daa635", 00:11:00.166 "is_configured": true, 00:11:00.166 "data_offset": 0, 00:11:00.166 "data_size": 65536 00:11:00.166 }, 00:11:00.166 { 00:11:00.166 "name": "BaseBdev2", 00:11:00.166 "uuid": "b49af818-9169-4673-ad78-c739f19e933b", 00:11:00.166 "is_configured": true, 00:11:00.166 "data_offset": 0, 00:11:00.166 "data_size": 65536 00:11:00.166 } 00:11:00.166 ] 00:11:00.166 }' 00:11:00.166 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:00.166 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.730 [2024-12-05 19:30:53.927851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.730 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.730 "name": "Existed_Raid", 00:11:00.730 "aliases": [ 00:11:00.730 "3e2275d9-f28d-4199-be6c-728b23774027" 00:11:00.730 ], 00:11:00.731 "product_name": "Raid Volume", 00:11:00.731 "block_size": 512, 00:11:00.731 "num_blocks": 131072, 00:11:00.731 "uuid": "3e2275d9-f28d-4199-be6c-728b23774027", 00:11:00.731 "assigned_rate_limits": { 00:11:00.731 "rw_ios_per_sec": 0, 00:11:00.731 "rw_mbytes_per_sec": 0, 00:11:00.731 "r_mbytes_per_sec": 
0, 00:11:00.731 "w_mbytes_per_sec": 0 00:11:00.731 }, 00:11:00.731 "claimed": false, 00:11:00.731 "zoned": false, 00:11:00.731 "supported_io_types": { 00:11:00.731 "read": true, 00:11:00.731 "write": true, 00:11:00.731 "unmap": true, 00:11:00.731 "flush": true, 00:11:00.731 "reset": true, 00:11:00.731 "nvme_admin": false, 00:11:00.731 "nvme_io": false, 00:11:00.731 "nvme_io_md": false, 00:11:00.731 "write_zeroes": true, 00:11:00.731 "zcopy": false, 00:11:00.731 "get_zone_info": false, 00:11:00.731 "zone_management": false, 00:11:00.731 "zone_append": false, 00:11:00.731 "compare": false, 00:11:00.731 "compare_and_write": false, 00:11:00.731 "abort": false, 00:11:00.731 "seek_hole": false, 00:11:00.731 "seek_data": false, 00:11:00.731 "copy": false, 00:11:00.731 "nvme_iov_md": false 00:11:00.731 }, 00:11:00.731 "memory_domains": [ 00:11:00.731 { 00:11:00.731 "dma_device_id": "system", 00:11:00.731 "dma_device_type": 1 00:11:00.731 }, 00:11:00.731 { 00:11:00.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.731 "dma_device_type": 2 00:11:00.731 }, 00:11:00.731 { 00:11:00.731 "dma_device_id": "system", 00:11:00.731 "dma_device_type": 1 00:11:00.731 }, 00:11:00.731 { 00:11:00.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.731 "dma_device_type": 2 00:11:00.731 } 00:11:00.731 ], 00:11:00.731 "driver_specific": { 00:11:00.731 "raid": { 00:11:00.731 "uuid": "3e2275d9-f28d-4199-be6c-728b23774027", 00:11:00.731 "strip_size_kb": 64, 00:11:00.731 "state": "online", 00:11:00.731 "raid_level": "concat", 00:11:00.731 "superblock": false, 00:11:00.731 "num_base_bdevs": 2, 00:11:00.731 "num_base_bdevs_discovered": 2, 00:11:00.731 "num_base_bdevs_operational": 2, 00:11:00.731 "base_bdevs_list": [ 00:11:00.731 { 00:11:00.731 "name": "BaseBdev1", 00:11:00.731 "uuid": "5baa7242-cf47-40f9-92cf-a21550daa635", 00:11:00.731 "is_configured": true, 00:11:00.731 "data_offset": 0, 00:11:00.731 "data_size": 65536 00:11:00.731 }, 00:11:00.731 { 00:11:00.731 "name": "BaseBdev2", 
00:11:00.731 "uuid": "b49af818-9169-4673-ad78-c739f19e933b", 00:11:00.731 "is_configured": true, 00:11:00.731 "data_offset": 0, 00:11:00.731 "data_size": 65536 00:11:00.731 } 00:11:00.731 ] 00:11:00.731 } 00:11:00.731 } 00:11:00.731 }' 00:11:00.731 19:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:00.731 BaseBdev2' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.731 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.989 [2024-12-05 19:30:54.195640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.989 [2024-12-05 19:30:54.195689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.989 [2024-12-05 19:30:54.195780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.989 "name": "Existed_Raid", 00:11:00.989 "uuid": "3e2275d9-f28d-4199-be6c-728b23774027", 00:11:00.989 "strip_size_kb": 64, 00:11:00.989 
"state": "offline", 00:11:00.989 "raid_level": "concat", 00:11:00.989 "superblock": false, 00:11:00.989 "num_base_bdevs": 2, 00:11:00.989 "num_base_bdevs_discovered": 1, 00:11:00.989 "num_base_bdevs_operational": 1, 00:11:00.989 "base_bdevs_list": [ 00:11:00.989 { 00:11:00.989 "name": null, 00:11:00.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.989 "is_configured": false, 00:11:00.989 "data_offset": 0, 00:11:00.989 "data_size": 65536 00:11:00.989 }, 00:11:00.989 { 00:11:00.989 "name": "BaseBdev2", 00:11:00.989 "uuid": "b49af818-9169-4673-ad78-c739f19e933b", 00:11:00.989 "is_configured": true, 00:11:00.989 "data_offset": 0, 00:11:00.989 "data_size": 65536 00:11:00.989 } 00:11:00.989 ] 00:11:00.989 }' 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.989 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.556 [2024-12-05 19:30:54.883317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.556 [2024-12-05 19:30:54.883391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.556 19:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61637 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61637 ']' 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61637 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61637 00:11:01.815 killing process with pid 61637 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61637' 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61637 00:11:01.815 19:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61637 00:11:01.815 [2024-12-05 19:30:55.070835] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.815 [2024-12-05 19:30:55.086909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.208 ************************************ 00:11:03.208 END TEST raid_state_function_test 00:11:03.208 ************************************ 00:11:03.208 19:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.208 00:11:03.208 real 0m5.680s 00:11:03.208 user 0m8.505s 00:11:03.208 sys 0m0.824s 00:11:03.208 19:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.208 19:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.208 19:30:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:03.208 19:30:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:03.208 19:30:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.208 19:30:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.208 ************************************ 00:11:03.208 START TEST raid_state_function_test_sb 00:11:03.208 ************************************ 00:11:03.208 19:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:11:03.208 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.209 Process raid pid: 61896 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61896 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61896' 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61896 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61896 ']' 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.209 19:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.209 [2024-12-05 19:30:56.402572] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:11:03.209 [2024-12-05 19:30:56.403088] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.209 [2024-12-05 19:30:56.600106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.467 [2024-12-05 19:30:56.790579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.752 [2024-12-05 19:30:57.082102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.752 [2024-12-05 19:30:57.082436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.343 [2024-12-05 19:30:57.480044] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:11:04.343 [2024-12-05 19:30:57.480134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.343 [2024-12-05 19:30:57.480156] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.343 [2024-12-05 19:30:57.480176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.343 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.343 "name": "Existed_Raid", 00:11:04.343 "uuid": "807c4a45-97f1-4e39-9505-507b5b6537b6", 00:11:04.343 "strip_size_kb": 64, 00:11:04.343 "state": "configuring", 00:11:04.343 "raid_level": "concat", 00:11:04.343 "superblock": true, 00:11:04.343 "num_base_bdevs": 2, 00:11:04.343 "num_base_bdevs_discovered": 0, 00:11:04.344 "num_base_bdevs_operational": 2, 00:11:04.344 "base_bdevs_list": [ 00:11:04.344 { 00:11:04.344 "name": "BaseBdev1", 00:11:04.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.344 "is_configured": false, 00:11:04.344 "data_offset": 0, 00:11:04.344 "data_size": 0 00:11:04.344 }, 00:11:04.344 { 00:11:04.344 "name": "BaseBdev2", 00:11:04.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.344 "is_configured": false, 00:11:04.344 "data_offset": 0, 00:11:04.344 "data_size": 0 00:11:04.344 } 00:11:04.344 ] 00:11:04.344 }' 00:11:04.344 19:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.344 19:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.601 [2024-12-05 19:30:58.008063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:11:04.601 [2024-12-05 19:30:58.008276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.601 [2024-12-05 19:30:58.016036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.601 [2024-12-05 19:30:58.016118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.601 [2024-12-05 19:30:58.016145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.601 [2024-12-05 19:30:58.016170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.601 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.858 [2024-12-05 19:30:58.062874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.858 BaseBdev1 00:11:04.858 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.859 [ 00:11:04.859 { 00:11:04.859 "name": "BaseBdev1", 00:11:04.859 "aliases": [ 00:11:04.859 "3725c8cf-de50-4b40-be16-e7d791ca3150" 00:11:04.859 ], 00:11:04.859 "product_name": "Malloc disk", 00:11:04.859 "block_size": 512, 00:11:04.859 "num_blocks": 65536, 00:11:04.859 "uuid": "3725c8cf-de50-4b40-be16-e7d791ca3150", 00:11:04.859 "assigned_rate_limits": { 00:11:04.859 "rw_ios_per_sec": 0, 00:11:04.859 "rw_mbytes_per_sec": 0, 00:11:04.859 "r_mbytes_per_sec": 0, 00:11:04.859 "w_mbytes_per_sec": 0 00:11:04.859 }, 00:11:04.859 "claimed": true, 
00:11:04.859 "claim_type": "exclusive_write", 00:11:04.859 "zoned": false, 00:11:04.859 "supported_io_types": { 00:11:04.859 "read": true, 00:11:04.859 "write": true, 00:11:04.859 "unmap": true, 00:11:04.859 "flush": true, 00:11:04.859 "reset": true, 00:11:04.859 "nvme_admin": false, 00:11:04.859 "nvme_io": false, 00:11:04.859 "nvme_io_md": false, 00:11:04.859 "write_zeroes": true, 00:11:04.859 "zcopy": true, 00:11:04.859 "get_zone_info": false, 00:11:04.859 "zone_management": false, 00:11:04.859 "zone_append": false, 00:11:04.859 "compare": false, 00:11:04.859 "compare_and_write": false, 00:11:04.859 "abort": true, 00:11:04.859 "seek_hole": false, 00:11:04.859 "seek_data": false, 00:11:04.859 "copy": true, 00:11:04.859 "nvme_iov_md": false 00:11:04.859 }, 00:11:04.859 "memory_domains": [ 00:11:04.859 { 00:11:04.859 "dma_device_id": "system", 00:11:04.859 "dma_device_type": 1 00:11:04.859 }, 00:11:04.859 { 00:11:04.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.859 "dma_device_type": 2 00:11:04.859 } 00:11:04.859 ], 00:11:04.859 "driver_specific": {} 00:11:04.859 } 00:11:04.859 ] 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.859 19:30:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.859 "name": "Existed_Raid", 00:11:04.859 "uuid": "1bc737ce-585e-4d36-9139-440ea51fd2c8", 00:11:04.859 "strip_size_kb": 64, 00:11:04.859 "state": "configuring", 00:11:04.859 "raid_level": "concat", 00:11:04.859 "superblock": true, 00:11:04.859 "num_base_bdevs": 2, 00:11:04.859 "num_base_bdevs_discovered": 1, 00:11:04.859 "num_base_bdevs_operational": 2, 00:11:04.859 "base_bdevs_list": [ 00:11:04.859 { 00:11:04.859 "name": "BaseBdev1", 00:11:04.859 "uuid": "3725c8cf-de50-4b40-be16-e7d791ca3150", 00:11:04.859 "is_configured": true, 00:11:04.859 "data_offset": 2048, 00:11:04.859 "data_size": 63488 00:11:04.859 }, 00:11:04.859 { 00:11:04.859 "name": "BaseBdev2", 00:11:04.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.859 
"is_configured": false, 00:11:04.859 "data_offset": 0, 00:11:04.859 "data_size": 0 00:11:04.859 } 00:11:04.859 ] 00:11:04.859 }' 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.859 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.424 [2024-12-05 19:30:58.635115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.424 [2024-12-05 19:30:58.635187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.424 [2024-12-05 19:30:58.643192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.424 [2024-12-05 19:30:58.646039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.424 [2024-12-05 19:30:58.646247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.424 19:30:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.424 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.425 19:30:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.425 "name": "Existed_Raid", 00:11:05.425 "uuid": "c98f483d-63f5-42f6-abac-6710fa2a921e", 00:11:05.425 "strip_size_kb": 64, 00:11:05.425 "state": "configuring", 00:11:05.425 "raid_level": "concat", 00:11:05.425 "superblock": true, 00:11:05.425 "num_base_bdevs": 2, 00:11:05.425 "num_base_bdevs_discovered": 1, 00:11:05.425 "num_base_bdevs_operational": 2, 00:11:05.425 "base_bdevs_list": [ 00:11:05.425 { 00:11:05.425 "name": "BaseBdev1", 00:11:05.425 "uuid": "3725c8cf-de50-4b40-be16-e7d791ca3150", 00:11:05.425 "is_configured": true, 00:11:05.425 "data_offset": 2048, 00:11:05.425 "data_size": 63488 00:11:05.425 }, 00:11:05.425 { 00:11:05.425 "name": "BaseBdev2", 00:11:05.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.425 "is_configured": false, 00:11:05.425 "data_offset": 0, 00:11:05.425 "data_size": 0 00:11:05.425 } 00:11:05.425 ] 00:11:05.425 }' 00:11:05.425 19:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.425 19:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.991 [2024-12-05 19:30:59.228317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.991 [2024-12-05 19:30:59.229308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.991 [2024-12-05 19:30:59.229340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:05.991 BaseBdev2 00:11:05.991 [2024-12-05 19:30:59.229743] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:05.991 [2024-12-05 19:30:59.229999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.991 [2024-12-05 19:30:59.230043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:05.991 [2024-12-05 19:30:59.230228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.991 
19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.991 [ 00:11:05.991 { 00:11:05.991 "name": "BaseBdev2", 00:11:05.991 "aliases": [ 00:11:05.991 "84298a4e-b520-4680-9d65-d100345b0f0b" 00:11:05.991 ], 00:11:05.991 "product_name": "Malloc disk", 00:11:05.991 "block_size": 512, 00:11:05.991 "num_blocks": 65536, 00:11:05.991 "uuid": "84298a4e-b520-4680-9d65-d100345b0f0b", 00:11:05.991 "assigned_rate_limits": { 00:11:05.991 "rw_ios_per_sec": 0, 00:11:05.991 "rw_mbytes_per_sec": 0, 00:11:05.991 "r_mbytes_per_sec": 0, 00:11:05.991 "w_mbytes_per_sec": 0 00:11:05.991 }, 00:11:05.991 "claimed": true, 00:11:05.991 "claim_type": "exclusive_write", 00:11:05.991 "zoned": false, 00:11:05.991 "supported_io_types": { 00:11:05.991 "read": true, 00:11:05.991 "write": true, 00:11:05.991 "unmap": true, 00:11:05.991 "flush": true, 00:11:05.991 "reset": true, 00:11:05.991 "nvme_admin": false, 00:11:05.991 "nvme_io": false, 00:11:05.991 "nvme_io_md": false, 00:11:05.991 "write_zeroes": true, 00:11:05.991 "zcopy": true, 00:11:05.991 "get_zone_info": false, 00:11:05.991 "zone_management": false, 00:11:05.991 "zone_append": false, 00:11:05.991 "compare": false, 00:11:05.991 "compare_and_write": false, 00:11:05.991 "abort": true, 00:11:05.991 "seek_hole": false, 00:11:05.991 "seek_data": false, 00:11:05.991 "copy": true, 00:11:05.991 "nvme_iov_md": false 00:11:05.991 }, 00:11:05.991 "memory_domains": [ 00:11:05.991 { 00:11:05.991 "dma_device_id": "system", 00:11:05.991 "dma_device_type": 1 00:11:05.991 }, 00:11:05.991 { 00:11:05.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.991 "dma_device_type": 2 00:11:05.991 } 00:11:05.991 ], 00:11:05.991 "driver_specific": {} 00:11:05.991 } 00:11:05.991 ] 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.991 19:30:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.991 19:30:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.991 "name": "Existed_Raid", 00:11:05.991 "uuid": "c98f483d-63f5-42f6-abac-6710fa2a921e", 00:11:05.991 "strip_size_kb": 64, 00:11:05.991 "state": "online", 00:11:05.991 "raid_level": "concat", 00:11:05.991 "superblock": true, 00:11:05.991 "num_base_bdevs": 2, 00:11:05.991 "num_base_bdevs_discovered": 2, 00:11:05.991 "num_base_bdevs_operational": 2, 00:11:05.991 "base_bdevs_list": [ 00:11:05.991 { 00:11:05.991 "name": "BaseBdev1", 00:11:05.991 "uuid": "3725c8cf-de50-4b40-be16-e7d791ca3150", 00:11:05.991 "is_configured": true, 00:11:05.991 "data_offset": 2048, 00:11:05.991 "data_size": 63488 00:11:05.991 }, 00:11:05.991 { 00:11:05.991 "name": "BaseBdev2", 00:11:05.991 "uuid": "84298a4e-b520-4680-9d65-d100345b0f0b", 00:11:05.991 "is_configured": true, 00:11:05.991 "data_offset": 2048, 00:11:05.991 "data_size": 63488 00:11:05.991 } 00:11:05.991 ] 00:11:05.991 }' 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.991 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.558 [2024-12-05 19:30:59.804913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.558 "name": "Existed_Raid", 00:11:06.558 "aliases": [ 00:11:06.558 "c98f483d-63f5-42f6-abac-6710fa2a921e" 00:11:06.558 ], 00:11:06.558 "product_name": "Raid Volume", 00:11:06.558 "block_size": 512, 00:11:06.558 "num_blocks": 126976, 00:11:06.558 "uuid": "c98f483d-63f5-42f6-abac-6710fa2a921e", 00:11:06.558 "assigned_rate_limits": { 00:11:06.558 "rw_ios_per_sec": 0, 00:11:06.558 "rw_mbytes_per_sec": 0, 00:11:06.558 "r_mbytes_per_sec": 0, 00:11:06.558 "w_mbytes_per_sec": 0 00:11:06.558 }, 00:11:06.558 "claimed": false, 00:11:06.558 "zoned": false, 00:11:06.558 "supported_io_types": { 00:11:06.558 "read": true, 00:11:06.558 "write": true, 00:11:06.558 "unmap": true, 00:11:06.558 "flush": true, 00:11:06.558 "reset": true, 00:11:06.558 "nvme_admin": false, 00:11:06.558 "nvme_io": false, 00:11:06.558 "nvme_io_md": false, 00:11:06.558 "write_zeroes": true, 00:11:06.558 "zcopy": false, 00:11:06.558 "get_zone_info": false, 00:11:06.558 "zone_management": false, 00:11:06.558 "zone_append": false, 00:11:06.558 "compare": false, 00:11:06.558 "compare_and_write": false, 00:11:06.558 "abort": false, 00:11:06.558 "seek_hole": false, 00:11:06.558 "seek_data": false, 00:11:06.558 "copy": false, 00:11:06.558 "nvme_iov_md": false 00:11:06.558 }, 00:11:06.558 "memory_domains": [ 00:11:06.558 { 00:11:06.558 
"dma_device_id": "system", 00:11:06.558 "dma_device_type": 1 00:11:06.558 }, 00:11:06.558 { 00:11:06.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.558 "dma_device_type": 2 00:11:06.558 }, 00:11:06.558 { 00:11:06.558 "dma_device_id": "system", 00:11:06.558 "dma_device_type": 1 00:11:06.558 }, 00:11:06.558 { 00:11:06.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.558 "dma_device_type": 2 00:11:06.558 } 00:11:06.558 ], 00:11:06.558 "driver_specific": { 00:11:06.558 "raid": { 00:11:06.558 "uuid": "c98f483d-63f5-42f6-abac-6710fa2a921e", 00:11:06.558 "strip_size_kb": 64, 00:11:06.558 "state": "online", 00:11:06.558 "raid_level": "concat", 00:11:06.558 "superblock": true, 00:11:06.558 "num_base_bdevs": 2, 00:11:06.558 "num_base_bdevs_discovered": 2, 00:11:06.558 "num_base_bdevs_operational": 2, 00:11:06.558 "base_bdevs_list": [ 00:11:06.558 { 00:11:06.558 "name": "BaseBdev1", 00:11:06.558 "uuid": "3725c8cf-de50-4b40-be16-e7d791ca3150", 00:11:06.558 "is_configured": true, 00:11:06.558 "data_offset": 2048, 00:11:06.558 "data_size": 63488 00:11:06.558 }, 00:11:06.558 { 00:11:06.558 "name": "BaseBdev2", 00:11:06.558 "uuid": "84298a4e-b520-4680-9d65-d100345b0f0b", 00:11:06.558 "is_configured": true, 00:11:06.558 "data_offset": 2048, 00:11:06.558 "data_size": 63488 00:11:06.558 } 00:11:06.558 ] 00:11:06.558 } 00:11:06.558 } 00:11:06.558 }' 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:06.558 BaseBdev2' 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.558 19:30:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.558 19:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 [2024-12-05 19:31:00.068642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.817 [2024-12-05 19:31:00.068865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.817 [2024-12-05 19:31:00.068958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.817 "name": "Existed_Raid", 00:11:06.817 "uuid": "c98f483d-63f5-42f6-abac-6710fa2a921e", 00:11:06.817 "strip_size_kb": 64, 00:11:06.817 "state": "offline", 00:11:06.817 "raid_level": "concat", 00:11:06.817 "superblock": true, 00:11:06.817 "num_base_bdevs": 2, 00:11:06.817 "num_base_bdevs_discovered": 1, 00:11:06.817 "num_base_bdevs_operational": 1, 00:11:06.817 "base_bdevs_list": [ 00:11:06.817 { 00:11:06.817 "name": null, 00:11:06.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.817 "is_configured": false, 00:11:06.817 "data_offset": 0, 00:11:06.817 "data_size": 63488 00:11:06.817 }, 00:11:06.817 { 00:11:06.817 "name": "BaseBdev2", 00:11:06.817 "uuid": "84298a4e-b520-4680-9d65-d100345b0f0b", 00:11:06.817 "is_configured": true, 00:11:06.817 "data_offset": 2048, 00:11:06.817 "data_size": 63488 00:11:06.817 } 00:11:06.817 ] 
00:11:06.817 }' 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.817 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.385 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.385 [2024-12-05 19:31:00.753647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.385 [2024-12-05 19:31:00.753732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.645 19:31:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:07.645 19:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61896 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61896 ']' 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61896 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61896 00:11:07.646 killing process with pid 61896 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61896' 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61896 00:11:07.646 [2024-12-05 19:31:00.929343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.646 19:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61896 00:11:07.646 [2024-12-05 19:31:00.943932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.583 19:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:08.583 00:11:08.583 real 0m5.726s 00:11:08.583 user 0m8.663s 00:11:08.583 sys 0m0.847s 00:11:08.583 ************************************ 00:11:08.583 END TEST raid_state_function_test_sb 00:11:08.583 ************************************ 00:11:08.583 19:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.583 19:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.841 19:31:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:08.841 19:31:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.841 19:31:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.841 19:31:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.841 ************************************ 00:11:08.841 START TEST raid_superblock_test 00:11:08.841 ************************************ 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:08.841 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62154 00:11:08.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62154 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62154 ']' 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.842 19:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.842 [2024-12-05 19:31:02.170214] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:08.842 [2024-12-05 19:31:02.170410] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62154 ] 00:11:09.140 [2024-12-05 19:31:02.351469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.140 [2024-12-05 19:31:02.484907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.451 [2024-12-05 19:31:02.692261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.451 [2024-12-05 19:31:02.692473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:10.018 
19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.018 malloc1 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.018 [2024-12-05 19:31:03.232203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.018 [2024-12-05 19:31:03.232290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.018 [2024-12-05 19:31:03.232331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:10.018 [2024-12-05 19:31:03.232351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.018 [2024-12-05 19:31:03.235625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.018 pt1 00:11:10.018 [2024-12-05 19:31:03.236830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.018 malloc2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.018 [2024-12-05 19:31:03.288667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.018 [2024-12-05 19:31:03.288925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.018 [2024-12-05 19:31:03.288986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:10.018 [2024-12-05 19:31:03.289011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.018 [2024-12-05 19:31:03.292192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.018 pt2 00:11:10.018 [2024-12-05 19:31:03.292396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.018 [2024-12-05 19:31:03.296802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.018 [2024-12-05 19:31:03.299598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.018 [2024-12-05 19:31:03.300035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:10.018 [2024-12-05 19:31:03.300066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:10.018 [2024-12-05 19:31:03.300439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:10.018 [2024-12-05 19:31:03.300681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:10.018 [2024-12-05 19:31:03.300727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:10.018 [2024-12-05 19:31:03.301023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.018 "name": "raid_bdev1", 00:11:10.018 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:10.018 "strip_size_kb": 64, 00:11:10.018 "state": "online", 00:11:10.018 "raid_level": "concat", 00:11:10.018 "superblock": true, 00:11:10.018 "num_base_bdevs": 2, 00:11:10.018 "num_base_bdevs_discovered": 2, 00:11:10.018 "num_base_bdevs_operational": 2, 00:11:10.018 "base_bdevs_list": [ 00:11:10.018 { 00:11:10.018 "name": "pt1", 
00:11:10.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.018 "is_configured": true, 00:11:10.018 "data_offset": 2048, 00:11:10.018 "data_size": 63488 00:11:10.018 }, 00:11:10.018 { 00:11:10.018 "name": "pt2", 00:11:10.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.018 "is_configured": true, 00:11:10.018 "data_offset": 2048, 00:11:10.018 "data_size": 63488 00:11:10.018 } 00:11:10.018 ] 00:11:10.018 }' 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.018 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.585 [2024-12-05 19:31:03.825856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.585 "name": "raid_bdev1", 00:11:10.585 "aliases": [ 00:11:10.585 "4fab07e0-89ad-4988-84b0-5dde3ce00fbe" 00:11:10.585 ], 00:11:10.585 "product_name": "Raid Volume", 00:11:10.585 "block_size": 512, 00:11:10.585 "num_blocks": 126976, 00:11:10.585 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:10.585 "assigned_rate_limits": { 00:11:10.585 "rw_ios_per_sec": 0, 00:11:10.585 "rw_mbytes_per_sec": 0, 00:11:10.585 "r_mbytes_per_sec": 0, 00:11:10.585 "w_mbytes_per_sec": 0 00:11:10.585 }, 00:11:10.585 "claimed": false, 00:11:10.585 "zoned": false, 00:11:10.585 "supported_io_types": { 00:11:10.585 "read": true, 00:11:10.585 "write": true, 00:11:10.585 "unmap": true, 00:11:10.585 "flush": true, 00:11:10.585 "reset": true, 00:11:10.585 "nvme_admin": false, 00:11:10.585 "nvme_io": false, 00:11:10.585 "nvme_io_md": false, 00:11:10.585 "write_zeroes": true, 00:11:10.585 "zcopy": false, 00:11:10.585 "get_zone_info": false, 00:11:10.585 "zone_management": false, 00:11:10.585 "zone_append": false, 00:11:10.585 "compare": false, 00:11:10.585 "compare_and_write": false, 00:11:10.585 "abort": false, 00:11:10.585 "seek_hole": false, 00:11:10.585 "seek_data": false, 00:11:10.585 "copy": false, 00:11:10.585 "nvme_iov_md": false 00:11:10.585 }, 00:11:10.585 "memory_domains": [ 00:11:10.585 { 00:11:10.585 "dma_device_id": "system", 00:11:10.585 "dma_device_type": 1 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.585 "dma_device_type": 2 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "dma_device_id": "system", 00:11:10.585 "dma_device_type": 1 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.585 "dma_device_type": 2 00:11:10.585 } 00:11:10.585 ], 00:11:10.585 "driver_specific": { 00:11:10.585 "raid": { 00:11:10.585 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:10.585 "strip_size_kb": 64, 00:11:10.585 "state": "online", 00:11:10.585 
"raid_level": "concat", 00:11:10.585 "superblock": true, 00:11:10.585 "num_base_bdevs": 2, 00:11:10.585 "num_base_bdevs_discovered": 2, 00:11:10.585 "num_base_bdevs_operational": 2, 00:11:10.585 "base_bdevs_list": [ 00:11:10.585 { 00:11:10.585 "name": "pt1", 00:11:10.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.585 "is_configured": true, 00:11:10.585 "data_offset": 2048, 00:11:10.585 "data_size": 63488 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "name": "pt2", 00:11:10.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.585 "is_configured": true, 00:11:10.585 "data_offset": 2048, 00:11:10.585 "data_size": 63488 00:11:10.585 } 00:11:10.585 ] 00:11:10.585 } 00:11:10.585 } 00:11:10.585 }' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:10.585 pt2' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.585 19:31:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.585 19:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.585 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 [2024-12-05 19:31:04.045541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4fab07e0-89ad-4988-84b0-5dde3ce00fbe 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
4fab07e0-89ad-4988-84b0-5dde3ce00fbe ']' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 [2024-12-05 19:31:04.093213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.845 [2024-12-05 19:31:04.093258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.845 [2024-12-05 19:31:04.093387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.845 [2024-12-05 19:31:04.093456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.845 [2024-12-05 19:31:04.093476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.845 19:31:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 [2024-12-05 19:31:04.225298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.845 [2024-12-05 19:31:04.227950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:10.845 [2024-12-05 19:31:04.228186] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.845 [2024-12-05 19:31:04.228282] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.845 [2024-12-05 19:31:04.228310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.845 [2024-12-05 19:31:04.228327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:10.845 request: 00:11:10.845 { 00:11:10.845 "name": "raid_bdev1", 00:11:10.845 "raid_level": "concat", 00:11:10.845 "base_bdevs": [ 00:11:10.845 "malloc1", 00:11:10.845 "malloc2" 00:11:10.845 ], 00:11:10.845 "strip_size_kb": 64, 
00:11:10.845 "superblock": false, 00:11:10.845 "method": "bdev_raid_create", 00:11:10.845 "req_id": 1 00:11:10.845 } 00:11:10.845 Got JSON-RPC error response 00:11:10.845 response: 00:11:10.845 { 00:11:10.845 "code": -17, 00:11:10.845 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.845 } 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.845 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.105 [2024-12-05 19:31:04.289280] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:11:11.105 [2024-12-05 19:31:04.289376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.105 [2024-12-05 19:31:04.289405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:11.105 [2024-12-05 19:31:04.289423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.105 [2024-12-05 19:31:04.292504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.105 [2024-12-05 19:31:04.292739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:11.105 [2024-12-05 19:31:04.292878] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:11.105 [2024-12-05 19:31:04.292970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:11.105 pt1 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.105 "name": "raid_bdev1", 00:11:11.105 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:11.105 "strip_size_kb": 64, 00:11:11.105 "state": "configuring", 00:11:11.105 "raid_level": "concat", 00:11:11.105 "superblock": true, 00:11:11.105 "num_base_bdevs": 2, 00:11:11.105 "num_base_bdevs_discovered": 1, 00:11:11.105 "num_base_bdevs_operational": 2, 00:11:11.105 "base_bdevs_list": [ 00:11:11.105 { 00:11:11.105 "name": "pt1", 00:11:11.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.105 "is_configured": true, 00:11:11.105 "data_offset": 2048, 00:11:11.105 "data_size": 63488 00:11:11.105 }, 00:11:11.105 { 00:11:11.105 "name": null, 00:11:11.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.105 "is_configured": false, 00:11:11.105 "data_offset": 2048, 00:11:11.105 "data_size": 63488 00:11:11.105 } 00:11:11.105 ] 00:11:11.105 }' 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.105 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.364 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:11.364 19:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:11.364 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.364 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.364 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.364 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.364 [2024-12-05 19:31:04.761440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.364 [2024-12-05 19:31:04.761532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.365 [2024-12-05 19:31:04.761565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:11.365 [2024-12-05 19:31:04.761582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.365 [2024-12-05 19:31:04.762185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.365 [2024-12-05 19:31:04.762223] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.365 [2024-12-05 19:31:04.762335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.365 [2024-12-05 19:31:04.762375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.365 [2024-12-05 19:31:04.762518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:11.365 [2024-12-05 19:31:04.762544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:11.365 [2024-12-05 19:31:04.762862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:11.365 [2024-12-05 19:31:04.763059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:11.365 [2024-12-05 19:31:04.763081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:11.365 [2024-12-05 19:31:04.763250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.365 pt2 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.365 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.624 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.624 "name": "raid_bdev1", 00:11:11.624 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:11.624 "strip_size_kb": 64, 00:11:11.624 "state": "online", 00:11:11.624 "raid_level": "concat", 00:11:11.624 "superblock": true, 00:11:11.624 "num_base_bdevs": 2, 00:11:11.624 "num_base_bdevs_discovered": 2, 00:11:11.624 "num_base_bdevs_operational": 2, 00:11:11.624 "base_bdevs_list": [ 00:11:11.624 { 00:11:11.624 "name": "pt1", 00:11:11.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.624 "is_configured": true, 00:11:11.624 "data_offset": 2048, 00:11:11.624 "data_size": 63488 00:11:11.624 }, 00:11:11.624 { 00:11:11.624 "name": "pt2", 00:11:11.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.624 "is_configured": true, 00:11:11.624 "data_offset": 2048, 00:11:11.624 "data_size": 63488 00:11:11.624 } 00:11:11.624 ] 00:11:11.624 }' 00:11:11.624 19:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.624 19:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.882 19:31:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.882 [2024-12-05 19:31:05.249912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.882 "name": "raid_bdev1", 00:11:11.882 "aliases": [ 00:11:11.882 "4fab07e0-89ad-4988-84b0-5dde3ce00fbe" 00:11:11.882 ], 00:11:11.882 "product_name": "Raid Volume", 00:11:11.882 "block_size": 512, 00:11:11.882 "num_blocks": 126976, 00:11:11.882 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:11.882 "assigned_rate_limits": { 00:11:11.882 "rw_ios_per_sec": 0, 00:11:11.882 "rw_mbytes_per_sec": 0, 00:11:11.882 "r_mbytes_per_sec": 0, 00:11:11.882 "w_mbytes_per_sec": 0 00:11:11.882 }, 00:11:11.882 "claimed": false, 00:11:11.882 "zoned": false, 00:11:11.882 "supported_io_types": { 00:11:11.882 "read": true, 00:11:11.882 "write": true, 00:11:11.882 "unmap": true, 00:11:11.882 "flush": true, 00:11:11.882 "reset": true, 00:11:11.882 "nvme_admin": false, 00:11:11.882 "nvme_io": false, 00:11:11.882 "nvme_io_md": false, 00:11:11.882 "write_zeroes": true, 00:11:11.882 "zcopy": false, 00:11:11.882 "get_zone_info": false, 00:11:11.882 "zone_management": false, 00:11:11.882 "zone_append": false, 00:11:11.882 "compare": false, 00:11:11.882 "compare_and_write": false, 00:11:11.882 "abort": false, 00:11:11.882 "seek_hole": false, 00:11:11.882 
"seek_data": false, 00:11:11.882 "copy": false, 00:11:11.882 "nvme_iov_md": false 00:11:11.882 }, 00:11:11.882 "memory_domains": [ 00:11:11.882 { 00:11:11.882 "dma_device_id": "system", 00:11:11.882 "dma_device_type": 1 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.882 "dma_device_type": 2 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "system", 00:11:11.882 "dma_device_type": 1 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.882 "dma_device_type": 2 00:11:11.882 } 00:11:11.882 ], 00:11:11.882 "driver_specific": { 00:11:11.882 "raid": { 00:11:11.882 "uuid": "4fab07e0-89ad-4988-84b0-5dde3ce00fbe", 00:11:11.882 "strip_size_kb": 64, 00:11:11.882 "state": "online", 00:11:11.882 "raid_level": "concat", 00:11:11.882 "superblock": true, 00:11:11.882 "num_base_bdevs": 2, 00:11:11.882 "num_base_bdevs_discovered": 2, 00:11:11.882 "num_base_bdevs_operational": 2, 00:11:11.882 "base_bdevs_list": [ 00:11:11.882 { 00:11:11.882 "name": "pt1", 00:11:11.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.882 "is_configured": true, 00:11:11.882 "data_offset": 2048, 00:11:11.882 "data_size": 63488 00:11:11.882 }, 00:11:11.882 { 00:11:11.882 "name": "pt2", 00:11:11.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.882 "is_configured": true, 00:11:11.882 "data_offset": 2048, 00:11:11.882 "data_size": 63488 00:11:11.882 } 00:11:11.882 ] 00:11:11.882 } 00:11:11.882 } 00:11:11.882 }' 00:11:11.882 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:12.140 pt2' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.140 19:31:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:12.140 [2024-12-05 19:31:05.485971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4fab07e0-89ad-4988-84b0-5dde3ce00fbe '!=' 4fab07e0-89ad-4988-84b0-5dde3ce00fbe ']' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62154 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62154 ']' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62154 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62154 00:11:12.140 killing process with pid 62154 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62154' 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62154 00:11:12.140 [2024-12-05 19:31:05.562395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.140 19:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62154 00:11:12.140 [2024-12-05 19:31:05.562515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.140 [2024-12-05 19:31:05.562584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.140 [2024-12-05 19:31:05.562602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:12.398 [2024-12-05 19:31:05.751016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.771 19:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:13.771 00:11:13.771 real 0m4.758s 00:11:13.771 user 0m6.895s 00:11:13.771 sys 0m0.729s 00:11:13.771 ************************************ 00:11:13.771 END TEST raid_superblock_test 00:11:13.771 ************************************ 00:11:13.771 19:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.771 19:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 19:31:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:13.771 19:31:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.771 19:31:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.771 19:31:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 ************************************ 00:11:13.771 START TEST raid_read_error_test 00:11:13.771 ************************************ 00:11:13.771 19:31:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.771 19:31:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dDgLberpJ0 00:11:13.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62369 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62369 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62369 ']' 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.771 19:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 [2024-12-05 19:31:06.981348] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:13.771 [2024-12-05 19:31:06.981530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62369 ] 00:11:13.771 [2024-12-05 19:31:07.160197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.030 [2024-12-05 19:31:07.301723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.288 [2024-12-05 19:31:07.508095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.288 [2024-12-05 19:31:07.508373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.598 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.598 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.598 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.598 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.598 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.598 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 BaseBdev1_malloc 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 true 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 [2024-12-05 19:31:08.072476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.859 [2024-12-05 19:31:08.072549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.859 [2024-12-05 19:31:08.072578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.859 [2024-12-05 19:31:08.072596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.859 [2024-12-05 19:31:08.075543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.859 [2024-12-05 19:31:08.075771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.859 BaseBdev1 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 BaseBdev2_malloc 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 true 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 [2024-12-05 19:31:08.132684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.859 [2024-12-05 19:31:08.132970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.859 [2024-12-05 19:31:08.133016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.859 [2024-12-05 19:31:08.133036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.859 [2024-12-05 19:31:08.135842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.859 [2024-12-05 19:31:08.135906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.859 BaseBdev2 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 [2024-12-05 19:31:08.140828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:11:14.859 [2024-12-05 19:31:08.143269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.859 [2024-12-05 19:31:08.143499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.859 [2024-12-05 19:31:08.143521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:14.859 [2024-12-05 19:31:08.143853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:14.859 [2024-12-05 19:31:08.144118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.859 [2024-12-05 19:31:08.144138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:14.859 [2024-12-05 19:31:08.144309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.859 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.859 "name": "raid_bdev1", 00:11:14.859 "uuid": "f83cf532-7e68-4394-a0d7-5622c088619b", 00:11:14.860 "strip_size_kb": 64, 00:11:14.860 "state": "online", 00:11:14.860 "raid_level": "concat", 00:11:14.860 "superblock": true, 00:11:14.860 "num_base_bdevs": 2, 00:11:14.860 "num_base_bdevs_discovered": 2, 00:11:14.860 "num_base_bdevs_operational": 2, 00:11:14.860 "base_bdevs_list": [ 00:11:14.860 { 00:11:14.860 "name": "BaseBdev1", 00:11:14.860 "uuid": "4c31b9c1-7bd6-58ac-9e9d-627868a45c5f", 00:11:14.860 "is_configured": true, 00:11:14.860 "data_offset": 2048, 00:11:14.860 "data_size": 63488 00:11:14.860 }, 00:11:14.860 { 00:11:14.860 "name": "BaseBdev2", 00:11:14.860 "uuid": "3313d01e-aa5e-5237-9a25-0aaeadc62e4f", 00:11:14.860 "is_configured": true, 00:11:14.860 "data_offset": 2048, 00:11:14.860 "data_size": 63488 00:11:14.860 } 00:11:14.860 ] 00:11:14.860 }' 00:11:14.860 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.860 19:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.428 19:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:15.428 19:31:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.687 [2024-12-05 19:31:08.870648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:16.624 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.625 "name": "raid_bdev1", 00:11:16.625 "uuid": "f83cf532-7e68-4394-a0d7-5622c088619b", 00:11:16.625 "strip_size_kb": 64, 00:11:16.625 "state": "online", 00:11:16.625 "raid_level": "concat", 00:11:16.625 "superblock": true, 00:11:16.625 "num_base_bdevs": 2, 00:11:16.625 "num_base_bdevs_discovered": 2, 00:11:16.625 "num_base_bdevs_operational": 2, 00:11:16.625 "base_bdevs_list": [ 00:11:16.625 { 00:11:16.625 "name": "BaseBdev1", 00:11:16.625 "uuid": "4c31b9c1-7bd6-58ac-9e9d-627868a45c5f", 00:11:16.625 "is_configured": true, 00:11:16.625 "data_offset": 2048, 00:11:16.625 "data_size": 63488 00:11:16.625 }, 00:11:16.625 { 00:11:16.625 "name": "BaseBdev2", 00:11:16.625 "uuid": "3313d01e-aa5e-5237-9a25-0aaeadc62e4f", 00:11:16.625 "is_configured": true, 00:11:16.625 "data_offset": 2048, 00:11:16.625 "data_size": 63488 00:11:16.625 } 00:11:16.625 ] 00:11:16.625 }' 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.625 19:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.885 19:31:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.885 [2024-12-05 19:31:10.309137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.885 [2024-12-05 19:31:10.309177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.885 [2024-12-05 19:31:10.312862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.885 [2024-12-05 19:31:10.312921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.885 [2024-12-05 19:31:10.312967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.885 [2024-12-05 19:31:10.312985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:16.885 { 00:11:16.885 "results": [ 00:11:16.885 { 00:11:16.885 "job": "raid_bdev1", 00:11:16.885 "core_mask": "0x1", 00:11:16.885 "workload": "randrw", 00:11:16.885 "percentage": 50, 00:11:16.885 "status": "finished", 00:11:16.885 "queue_depth": 1, 00:11:16.885 "io_size": 131072, 00:11:16.885 "runtime": 1.43589, 00:11:16.885 "iops": 10220.838643628691, 00:11:16.885 "mibps": 1277.6048304535864, 00:11:16.885 "io_failed": 1, 00:11:16.885 "io_timeout": 0, 00:11:16.885 "avg_latency_us": 136.35065476596037, 00:11:16.885 "min_latency_us": 39.79636363636364, 00:11:16.885 "max_latency_us": 1824.581818181818 00:11:16.885 } 00:11:16.885 ], 00:11:16.885 "core_count": 1 00:11:16.885 } 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62369 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62369 ']' 00:11:16.885 19:31:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62369 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.885 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62369 00:11:17.144 killing process with pid 62369 00:11:17.144 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.144 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.144 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62369' 00:11:17.144 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62369 00:11:17.144 [2024-12-05 19:31:10.353875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.144 19:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62369 00:11:17.144 [2024-12-05 19:31:10.472850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dDgLberpJ0 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:18.519 00:11:18.519 real 0m4.787s 00:11:18.519 user 0m6.059s 00:11:18.519 sys 0m0.594s 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.519 19:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.519 ************************************ 00:11:18.519 END TEST raid_read_error_test 00:11:18.519 ************************************ 00:11:18.519 19:31:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:18.519 19:31:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:18.519 19:31:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.519 19:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.519 ************************************ 00:11:18.519 START TEST raid_write_error_test 00:11:18.520 ************************************ 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.520 19:31:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hm68Mz4YOd 00:11:18.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62511 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62511 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62511 ']' 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.520 19:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 [2024-12-05 19:31:11.813051] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:18.520 [2024-12-05 19:31:11.813213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62511 ] 00:11:18.778 [2024-12-05 19:31:11.986556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.778 [2024-12-05 19:31:12.115193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.038 [2024-12-05 19:31:12.336487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.038 [2024-12-05 19:31:12.336549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.606 BaseBdev1_malloc 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.606 true 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.606 [2024-12-05 19:31:12.976741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.606 [2024-12-05 19:31:12.976823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.606 [2024-12-05 19:31:12.976852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.606 [2024-12-05 19:31:12.976869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.606 [2024-12-05 19:31:12.979858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.606 [2024-12-05 19:31:12.980058] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.606 BaseBdev1 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.606 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.607 19:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 BaseBdev2_malloc 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.607 19:31:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 true 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 [2024-12-05 19:31:13.039340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.607 [2024-12-05 19:31:13.039421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.607 [2024-12-05 19:31:13.039445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.607 [2024-12-05 19:31:13.039460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.607 [2024-12-05 19:31:13.042328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.607 [2024-12-05 19:31:13.042393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.607 BaseBdev2 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.607 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.866 [2024-12-05 19:31:13.051390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:19.866 [2024-12-05 19:31:13.054051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.866 [2024-12-05 19:31:13.054472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.866 [2024-12-05 19:31:13.054603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:19.866 [2024-12-05 19:31:13.055011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:19.866 [2024-12-05 19:31:13.055262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.866 [2024-12-05 19:31:13.055280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:19.866 [2024-12-05 19:31:13.055528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.866 19:31:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.866 "name": "raid_bdev1", 00:11:19.866 "uuid": "0cfd407d-53de-4a99-85ef-f6f19a7d57d9", 00:11:19.866 "strip_size_kb": 64, 00:11:19.866 "state": "online", 00:11:19.866 "raid_level": "concat", 00:11:19.866 "superblock": true, 00:11:19.866 "num_base_bdevs": 2, 00:11:19.866 "num_base_bdevs_discovered": 2, 00:11:19.866 "num_base_bdevs_operational": 2, 00:11:19.866 "base_bdevs_list": [ 00:11:19.866 { 00:11:19.866 "name": "BaseBdev1", 00:11:19.866 "uuid": "3eb33ef0-9d00-5f74-bacd-4e48c7b4bb3b", 00:11:19.866 "is_configured": true, 00:11:19.866 "data_offset": 2048, 00:11:19.866 "data_size": 63488 00:11:19.866 }, 00:11:19.866 { 00:11:19.866 "name": "BaseBdev2", 00:11:19.866 "uuid": "3413de2a-13fb-5fc4-9940-792a01e77e6e", 00:11:19.866 "is_configured": true, 00:11:19.866 "data_offset": 2048, 00:11:19.866 "data_size": 63488 00:11:19.866 } 00:11:19.866 ] 00:11:19.866 }' 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.866 19:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.455 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:11:20.455 19:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:20.455 [2024-12-05 19:31:13.725137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.393 "name": "raid_bdev1", 00:11:21.393 "uuid": "0cfd407d-53de-4a99-85ef-f6f19a7d57d9", 00:11:21.393 "strip_size_kb": 64, 00:11:21.393 "state": "online", 00:11:21.393 "raid_level": "concat", 00:11:21.393 "superblock": true, 00:11:21.393 "num_base_bdevs": 2, 00:11:21.393 "num_base_bdevs_discovered": 2, 00:11:21.393 "num_base_bdevs_operational": 2, 00:11:21.393 "base_bdevs_list": [ 00:11:21.393 { 00:11:21.393 "name": "BaseBdev1", 00:11:21.393 "uuid": "3eb33ef0-9d00-5f74-bacd-4e48c7b4bb3b", 00:11:21.393 "is_configured": true, 00:11:21.393 "data_offset": 2048, 00:11:21.393 "data_size": 63488 00:11:21.393 }, 00:11:21.393 { 00:11:21.393 "name": "BaseBdev2", 00:11:21.393 "uuid": "3413de2a-13fb-5fc4-9940-792a01e77e6e", 00:11:21.393 "is_configured": true, 00:11:21.393 "data_offset": 2048, 00:11:21.393 "data_size": 63488 00:11:21.393 } 00:11:21.393 ] 00:11:21.393 }' 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.393 19:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.961 [2024-12-05 19:31:15.151746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.961 [2024-12-05 19:31:15.151793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.961 [2024-12-05 19:31:15.155449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.961 [2024-12-05 19:31:15.155666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.961 [2024-12-05 19:31:15.155867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.961 [2024-12-05 19:31:15.156017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:21.961 { 00:11:21.961 "results": [ 00:11:21.961 { 00:11:21.961 "job": "raid_bdev1", 00:11:21.961 "core_mask": "0x1", 00:11:21.961 "workload": "randrw", 00:11:21.961 "percentage": 50, 00:11:21.961 "status": "finished", 00:11:21.961 "queue_depth": 1, 00:11:21.961 "io_size": 131072, 00:11:21.961 "runtime": 1.42423, 00:11:21.961 "iops": 10687.880468744515, 00:11:21.961 "mibps": 1335.9850585930644, 00:11:21.961 "io_failed": 1, 00:11:21.961 "io_timeout": 0, 00:11:21.961 "avg_latency_us": 130.44054056959268, 00:11:21.961 "min_latency_us": 37.93454545454546, 00:11:21.961 "max_latency_us": 1787.3454545454545 00:11:21.961 } 00:11:21.961 ], 00:11:21.961 "core_count": 1 00:11:21.961 } 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62511 00:11:21.961 19:31:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62511 ']' 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62511 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62511 00:11:21.961 killing process with pid 62511 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62511' 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62511 00:11:21.961 [2024-12-05 19:31:15.195210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.961 19:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62511 00:11:21.961 [2024-12-05 19:31:15.323996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hm68Mz4YOd 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.338 19:31:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:23.338 00:11:23.338 real 0m4.688s 00:11:23.338 user 0m5.965s 00:11:23.338 sys 0m0.601s 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.338 19:31:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.338 ************************************ 00:11:23.338 END TEST raid_write_error_test 00:11:23.338 ************************************ 00:11:23.338 19:31:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:23.338 19:31:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:23.338 19:31:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.338 19:31:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.338 19:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.338 ************************************ 00:11:23.338 START TEST raid_state_function_test 00:11:23.338 ************************************ 00:11:23.338 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:11:23.338 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.339 Process raid pid: 62655 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62655 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62655' 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62655 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62655 ']' 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.339 19:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.339 [2024-12-05 19:31:16.567846] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:23.339 [2024-12-05 19:31:16.568222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.339 [2024-12-05 19:31:16.745106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.597 [2024-12-05 19:31:16.893562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.856 [2024-12-05 19:31:17.126504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.856 [2024-12-05 19:31:17.126792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.423 [2024-12-05 19:31:17.611996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.423 [2024-12-05 19:31:17.612197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.423 [2024-12-05 19:31:17.612226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.423 [2024-12-05 19:31:17.612244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.423 19:31:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.423 "name": "Existed_Raid", 00:11:24.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.423 "strip_size_kb": 0, 00:11:24.423 "state": "configuring", 00:11:24.423 
"raid_level": "raid1", 00:11:24.423 "superblock": false, 00:11:24.423 "num_base_bdevs": 2, 00:11:24.423 "num_base_bdevs_discovered": 0, 00:11:24.423 "num_base_bdevs_operational": 2, 00:11:24.423 "base_bdevs_list": [ 00:11:24.423 { 00:11:24.423 "name": "BaseBdev1", 00:11:24.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.423 "is_configured": false, 00:11:24.423 "data_offset": 0, 00:11:24.423 "data_size": 0 00:11:24.423 }, 00:11:24.423 { 00:11:24.423 "name": "BaseBdev2", 00:11:24.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.423 "is_configured": false, 00:11:24.423 "data_offset": 0, 00:11:24.423 "data_size": 0 00:11:24.423 } 00:11:24.423 ] 00:11:24.423 }' 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.423 19:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.016 [2024-12-05 19:31:18.200152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.016 [2024-12-05 19:31:18.200398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.016 [2024-12-05 19:31:18.212136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.016 [2024-12-05 19:31:18.212219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.016 [2024-12-05 19:31:18.212249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.016 [2024-12-05 19:31:18.212266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.016 [2024-12-05 19:31:18.256721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.016 BaseBdev1 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.016 [ 00:11:25.016 { 00:11:25.016 "name": "BaseBdev1", 00:11:25.016 "aliases": [ 00:11:25.016 "4dc71fdd-9d10-4fbe-97b6-063efb0f065b" 00:11:25.016 ], 00:11:25.016 "product_name": "Malloc disk", 00:11:25.016 "block_size": 512, 00:11:25.016 "num_blocks": 65536, 00:11:25.016 "uuid": "4dc71fdd-9d10-4fbe-97b6-063efb0f065b", 00:11:25.016 "assigned_rate_limits": { 00:11:25.016 "rw_ios_per_sec": 0, 00:11:25.016 "rw_mbytes_per_sec": 0, 00:11:25.016 "r_mbytes_per_sec": 0, 00:11:25.016 "w_mbytes_per_sec": 0 00:11:25.016 }, 00:11:25.016 "claimed": true, 00:11:25.016 "claim_type": "exclusive_write", 00:11:25.016 "zoned": false, 00:11:25.016 "supported_io_types": { 00:11:25.016 "read": true, 00:11:25.016 "write": true, 00:11:25.016 "unmap": true, 00:11:25.016 "flush": true, 00:11:25.016 "reset": true, 00:11:25.016 "nvme_admin": false, 00:11:25.016 "nvme_io": false, 00:11:25.016 "nvme_io_md": false, 00:11:25.016 "write_zeroes": true, 00:11:25.016 "zcopy": true, 00:11:25.016 "get_zone_info": false, 00:11:25.016 "zone_management": false, 00:11:25.016 "zone_append": false, 00:11:25.016 "compare": false, 00:11:25.016 "compare_and_write": false, 00:11:25.016 "abort": true, 00:11:25.016 "seek_hole": false, 00:11:25.016 "seek_data": false, 00:11:25.016 "copy": true, 00:11:25.016 "nvme_iov_md": 
false 00:11:25.016 }, 00:11:25.016 "memory_domains": [ 00:11:25.016 { 00:11:25.016 "dma_device_id": "system", 00:11:25.016 "dma_device_type": 1 00:11:25.016 }, 00:11:25.016 { 00:11:25.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.016 "dma_device_type": 2 00:11:25.016 } 00:11:25.016 ], 00:11:25.016 "driver_specific": {} 00:11:25.016 } 00:11:25.016 ] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.016 
19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.016 "name": "Existed_Raid", 00:11:25.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.016 "strip_size_kb": 0, 00:11:25.016 "state": "configuring", 00:11:25.016 "raid_level": "raid1", 00:11:25.016 "superblock": false, 00:11:25.016 "num_base_bdevs": 2, 00:11:25.016 "num_base_bdevs_discovered": 1, 00:11:25.016 "num_base_bdevs_operational": 2, 00:11:25.016 "base_bdevs_list": [ 00:11:25.016 { 00:11:25.016 "name": "BaseBdev1", 00:11:25.016 "uuid": "4dc71fdd-9d10-4fbe-97b6-063efb0f065b", 00:11:25.016 "is_configured": true, 00:11:25.016 "data_offset": 0, 00:11:25.016 "data_size": 65536 00:11:25.016 }, 00:11:25.016 { 00:11:25.016 "name": "BaseBdev2", 00:11:25.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.016 "is_configured": false, 00:11:25.016 "data_offset": 0, 00:11:25.016 "data_size": 0 00:11:25.016 } 00:11:25.016 ] 00:11:25.016 }' 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.016 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.581 [2024-12-05 19:31:18.841017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.581 [2024-12-05 19:31:18.841108] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.581 [2024-12-05 19:31:18.849049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.581 [2024-12-05 19:31:18.851973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.581 [2024-12-05 19:31:18.852349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.581 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.582 "name": "Existed_Raid", 00:11:25.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.582 "strip_size_kb": 0, 00:11:25.582 "state": "configuring", 00:11:25.582 "raid_level": "raid1", 00:11:25.582 "superblock": false, 00:11:25.582 "num_base_bdevs": 2, 00:11:25.582 "num_base_bdevs_discovered": 1, 00:11:25.582 "num_base_bdevs_operational": 2, 00:11:25.582 "base_bdevs_list": [ 00:11:25.582 { 00:11:25.582 "name": "BaseBdev1", 00:11:25.582 "uuid": "4dc71fdd-9d10-4fbe-97b6-063efb0f065b", 00:11:25.582 "is_configured": true, 00:11:25.582 "data_offset": 0, 00:11:25.582 "data_size": 65536 00:11:25.582 }, 00:11:25.582 { 00:11:25.582 "name": "BaseBdev2", 00:11:25.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.582 "is_configured": false, 00:11:25.582 "data_offset": 0, 00:11:25.582 "data_size": 0 00:11:25.582 } 00:11:25.582 ] 
00:11:25.582 }' 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.582 19:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.147 [2024-12-05 19:31:19.440048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.147 [2024-12-05 19:31:19.440151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.147 [2024-12-05 19:31:19.440168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:26.147 [2024-12-05 19:31:19.440563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:26.147 [2024-12-05 19:31:19.440873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.147 [2024-12-05 19:31:19.440901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.147 [2024-12-05 19:31:19.441300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.147 BaseBdev2 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.147 [ 00:11:26.147 { 00:11:26.147 "name": "BaseBdev2", 00:11:26.147 "aliases": [ 00:11:26.147 "ae962cae-d2f5-4672-8355-7e59c13e453e" 00:11:26.147 ], 00:11:26.147 "product_name": "Malloc disk", 00:11:26.147 "block_size": 512, 00:11:26.147 "num_blocks": 65536, 00:11:26.147 "uuid": "ae962cae-d2f5-4672-8355-7e59c13e453e", 00:11:26.147 "assigned_rate_limits": { 00:11:26.147 "rw_ios_per_sec": 0, 00:11:26.147 "rw_mbytes_per_sec": 0, 00:11:26.147 "r_mbytes_per_sec": 0, 00:11:26.147 "w_mbytes_per_sec": 0 00:11:26.147 }, 00:11:26.147 "claimed": true, 00:11:26.147 "claim_type": "exclusive_write", 00:11:26.147 "zoned": false, 00:11:26.147 "supported_io_types": { 00:11:26.147 "read": true, 00:11:26.147 "write": true, 00:11:26.147 "unmap": true, 00:11:26.147 "flush": true, 00:11:26.147 "reset": true, 00:11:26.147 "nvme_admin": false, 00:11:26.147 "nvme_io": false, 00:11:26.147 "nvme_io_md": false, 00:11:26.147 "write_zeroes": 
true, 00:11:26.147 "zcopy": true, 00:11:26.147 "get_zone_info": false, 00:11:26.147 "zone_management": false, 00:11:26.147 "zone_append": false, 00:11:26.147 "compare": false, 00:11:26.147 "compare_and_write": false, 00:11:26.147 "abort": true, 00:11:26.147 "seek_hole": false, 00:11:26.147 "seek_data": false, 00:11:26.147 "copy": true, 00:11:26.147 "nvme_iov_md": false 00:11:26.147 }, 00:11:26.147 "memory_domains": [ 00:11:26.147 { 00:11:26.147 "dma_device_id": "system", 00:11:26.147 "dma_device_type": 1 00:11:26.147 }, 00:11:26.147 { 00:11:26.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.147 "dma_device_type": 2 00:11:26.147 } 00:11:26.147 ], 00:11:26.147 "driver_specific": {} 00:11:26.147 } 00:11:26.147 ] 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.147 19:31:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.147 "name": "Existed_Raid", 00:11:26.147 "uuid": "60de33a6-5d4f-4032-a292-c453d037bdc7", 00:11:26.147 "strip_size_kb": 0, 00:11:26.147 "state": "online", 00:11:26.147 "raid_level": "raid1", 00:11:26.147 "superblock": false, 00:11:26.147 "num_base_bdevs": 2, 00:11:26.147 "num_base_bdevs_discovered": 2, 00:11:26.147 "num_base_bdevs_operational": 2, 00:11:26.147 "base_bdevs_list": [ 00:11:26.147 { 00:11:26.147 "name": "BaseBdev1", 00:11:26.147 "uuid": "4dc71fdd-9d10-4fbe-97b6-063efb0f065b", 00:11:26.147 "is_configured": true, 00:11:26.147 "data_offset": 0, 00:11:26.147 "data_size": 65536 00:11:26.147 }, 00:11:26.147 { 00:11:26.147 "name": "BaseBdev2", 00:11:26.147 "uuid": "ae962cae-d2f5-4672-8355-7e59c13e453e", 00:11:26.147 "is_configured": true, 00:11:26.147 "data_offset": 0, 00:11:26.147 "data_size": 65536 00:11:26.147 } 00:11:26.147 ] 00:11:26.147 }' 00:11:26.147 19:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.147 19:31:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.715 [2024-12-05 19:31:20.016625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.715 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.715 "name": "Existed_Raid", 00:11:26.715 "aliases": [ 00:11:26.715 "60de33a6-5d4f-4032-a292-c453d037bdc7" 00:11:26.715 ], 00:11:26.715 "product_name": "Raid Volume", 00:11:26.715 "block_size": 512, 00:11:26.715 "num_blocks": 65536, 00:11:26.715 "uuid": "60de33a6-5d4f-4032-a292-c453d037bdc7", 00:11:26.715 "assigned_rate_limits": { 00:11:26.715 "rw_ios_per_sec": 0, 00:11:26.715 "rw_mbytes_per_sec": 0, 00:11:26.715 "r_mbytes_per_sec": 0, 00:11:26.715 
"w_mbytes_per_sec": 0 00:11:26.715 }, 00:11:26.715 "claimed": false, 00:11:26.715 "zoned": false, 00:11:26.715 "supported_io_types": { 00:11:26.715 "read": true, 00:11:26.715 "write": true, 00:11:26.715 "unmap": false, 00:11:26.715 "flush": false, 00:11:26.715 "reset": true, 00:11:26.715 "nvme_admin": false, 00:11:26.715 "nvme_io": false, 00:11:26.715 "nvme_io_md": false, 00:11:26.715 "write_zeroes": true, 00:11:26.715 "zcopy": false, 00:11:26.715 "get_zone_info": false, 00:11:26.715 "zone_management": false, 00:11:26.715 "zone_append": false, 00:11:26.715 "compare": false, 00:11:26.715 "compare_and_write": false, 00:11:26.715 "abort": false, 00:11:26.715 "seek_hole": false, 00:11:26.715 "seek_data": false, 00:11:26.715 "copy": false, 00:11:26.715 "nvme_iov_md": false 00:11:26.715 }, 00:11:26.715 "memory_domains": [ 00:11:26.715 { 00:11:26.715 "dma_device_id": "system", 00:11:26.715 "dma_device_type": 1 00:11:26.715 }, 00:11:26.715 { 00:11:26.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.715 "dma_device_type": 2 00:11:26.715 }, 00:11:26.715 { 00:11:26.715 "dma_device_id": "system", 00:11:26.715 "dma_device_type": 1 00:11:26.715 }, 00:11:26.715 { 00:11:26.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.715 "dma_device_type": 2 00:11:26.715 } 00:11:26.715 ], 00:11:26.715 "driver_specific": { 00:11:26.715 "raid": { 00:11:26.715 "uuid": "60de33a6-5d4f-4032-a292-c453d037bdc7", 00:11:26.715 "strip_size_kb": 0, 00:11:26.715 "state": "online", 00:11:26.715 "raid_level": "raid1", 00:11:26.715 "superblock": false, 00:11:26.715 "num_base_bdevs": 2, 00:11:26.715 "num_base_bdevs_discovered": 2, 00:11:26.715 "num_base_bdevs_operational": 2, 00:11:26.715 "base_bdevs_list": [ 00:11:26.715 { 00:11:26.715 "name": "BaseBdev1", 00:11:26.715 "uuid": "4dc71fdd-9d10-4fbe-97b6-063efb0f065b", 00:11:26.715 "is_configured": true, 00:11:26.715 "data_offset": 0, 00:11:26.715 "data_size": 65536 00:11:26.715 }, 00:11:26.715 { 00:11:26.715 "name": "BaseBdev2", 00:11:26.715 "uuid": 
"ae962cae-d2f5-4672-8355-7e59c13e453e", 00:11:26.715 "is_configured": true, 00:11:26.716 "data_offset": 0, 00:11:26.716 "data_size": 65536 00:11:26.716 } 00:11:26.716 ] 00:11:26.716 } 00:11:26.716 } 00:11:26.716 }' 00:11:26.716 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.716 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.716 BaseBdev2' 00:11:26.716 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.974 19:31:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.974 [2024-12-05 19:31:20.280461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.974 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.232 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.232 "name": "Existed_Raid", 00:11:27.232 "uuid": "60de33a6-5d4f-4032-a292-c453d037bdc7", 00:11:27.232 "strip_size_kb": 0, 00:11:27.232 "state": "online", 00:11:27.232 "raid_level": "raid1", 00:11:27.232 "superblock": false, 00:11:27.232 "num_base_bdevs": 2, 00:11:27.232 "num_base_bdevs_discovered": 1, 00:11:27.232 "num_base_bdevs_operational": 1, 00:11:27.232 "base_bdevs_list": [ 00:11:27.232 { 
00:11:27.232 "name": null, 00:11:27.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.232 "is_configured": false, 00:11:27.232 "data_offset": 0, 00:11:27.232 "data_size": 65536 00:11:27.232 }, 00:11:27.232 { 00:11:27.232 "name": "BaseBdev2", 00:11:27.232 "uuid": "ae962cae-d2f5-4672-8355-7e59c13e453e", 00:11:27.232 "is_configured": true, 00:11:27.232 "data_offset": 0, 00:11:27.232 "data_size": 65536 00:11:27.232 } 00:11:27.232 ] 00:11:27.232 }' 00:11:27.232 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.232 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.819 19:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:27.819 [2024-12-05 19:31:20.980073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.819 [2024-12-05 19:31:20.980227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.819 [2024-12-05 19:31:21.090725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.819 [2024-12-05 19:31:21.091050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.819 [2024-12-05 19:31:21.091240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62655 00:11:27.819 19:31:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62655 ']' 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62655 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:27.819 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.820 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62655 00:11:27.820 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.820 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.820 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62655' 00:11:27.820 killing process with pid 62655 00:11:27.820 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62655 00:11:27.820 [2024-12-05 19:31:21.179671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.820 19:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62655 00:11:27.820 [2024-12-05 19:31:21.198888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:29.196 ************************************ 00:11:29.196 END TEST raid_state_function_test 00:11:29.196 ************************************ 00:11:29.196 00:11:29.196 real 0m5.833s 00:11:29.196 user 0m8.850s 00:11:29.196 sys 0m0.747s 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.196 19:31:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:29.196 19:31:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.196 19:31:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.196 19:31:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.196 ************************************ 00:11:29.196 START TEST raid_state_function_test_sb 00:11:29.196 ************************************ 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:29.196 Process raid pid: 62919 00:11:29.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62919 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62919' 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62919 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62919 ']' 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.196 19:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.196 [2024-12-05 19:31:22.451582] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:11:29.196 [2024-12-05 19:31:22.452018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.196 [2024-12-05 19:31:22.625826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.455 [2024-12-05 19:31:22.753516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.714 [2024-12-05 19:31:22.961135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.714 [2024-12-05 19:31:22.961333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.282 [2024-12-05 19:31:23.452592] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.282 [2024-12-05 19:31:23.452834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.282 [2024-12-05 19:31:23.452864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.282 [2024-12-05 19:31:23.452882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.282 "name": "Existed_Raid", 00:11:30.282 "uuid": "5df7e9e0-33ce-48ea-8312-91e1ed0067e7", 00:11:30.282 "strip_size_kb": 0, 00:11:30.282 "state": "configuring", 00:11:30.282 "raid_level": "raid1", 00:11:30.282 "superblock": true, 00:11:30.282 "num_base_bdevs": 2, 00:11:30.282 "num_base_bdevs_discovered": 0, 00:11:30.282 "num_base_bdevs_operational": 2, 00:11:30.282 "base_bdevs_list": [ 00:11:30.282 { 00:11:30.282 "name": "BaseBdev1", 00:11:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.282 "is_configured": false, 00:11:30.282 "data_offset": 0, 00:11:30.282 "data_size": 0 00:11:30.282 }, 00:11:30.282 { 00:11:30.282 "name": "BaseBdev2", 00:11:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.282 "is_configured": false, 00:11:30.282 "data_offset": 0, 00:11:30.282 "data_size": 0 00:11:30.282 } 00:11:30.282 ] 00:11:30.282 }' 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.282 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.541 [2024-12-05 19:31:23.952687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:30.541 [2024-12-05 19:31:23.953666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.541 [2024-12-05 19:31:23.960677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.541 [2024-12-05 19:31:23.960905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.541 [2024-12-05 19:31:23.960932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.541 [2024-12-05 19:31:23.960952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.541 19:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.811 [2024-12-05 19:31:24.007094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.811 BaseBdev1 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.811 [ 00:11:30.811 { 00:11:30.811 "name": "BaseBdev1", 00:11:30.811 "aliases": [ 00:11:30.811 "727e3de9-3372-436b-b1d5-64342fb7d83d" 00:11:30.811 ], 00:11:30.811 "product_name": "Malloc disk", 00:11:30.811 "block_size": 512, 00:11:30.811 "num_blocks": 65536, 00:11:30.811 "uuid": "727e3de9-3372-436b-b1d5-64342fb7d83d", 00:11:30.811 "assigned_rate_limits": { 00:11:30.811 "rw_ios_per_sec": 0, 00:11:30.811 "rw_mbytes_per_sec": 0, 00:11:30.811 "r_mbytes_per_sec": 0, 00:11:30.811 "w_mbytes_per_sec": 0 00:11:30.811 }, 00:11:30.811 "claimed": true, 
00:11:30.811 "claim_type": "exclusive_write", 00:11:30.811 "zoned": false, 00:11:30.811 "supported_io_types": { 00:11:30.811 "read": true, 00:11:30.811 "write": true, 00:11:30.811 "unmap": true, 00:11:30.811 "flush": true, 00:11:30.811 "reset": true, 00:11:30.811 "nvme_admin": false, 00:11:30.811 "nvme_io": false, 00:11:30.811 "nvme_io_md": false, 00:11:30.811 "write_zeroes": true, 00:11:30.811 "zcopy": true, 00:11:30.811 "get_zone_info": false, 00:11:30.811 "zone_management": false, 00:11:30.811 "zone_append": false, 00:11:30.811 "compare": false, 00:11:30.811 "compare_and_write": false, 00:11:30.811 "abort": true, 00:11:30.811 "seek_hole": false, 00:11:30.811 "seek_data": false, 00:11:30.811 "copy": true, 00:11:30.811 "nvme_iov_md": false 00:11:30.811 }, 00:11:30.811 "memory_domains": [ 00:11:30.811 { 00:11:30.811 "dma_device_id": "system", 00:11:30.811 "dma_device_type": 1 00:11:30.811 }, 00:11:30.811 { 00:11:30.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.811 "dma_device_type": 2 00:11:30.811 } 00:11:30.811 ], 00:11:30.811 "driver_specific": {} 00:11:30.811 } 00:11:30.811 ] 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.811 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.812 "name": "Existed_Raid", 00:11:30.812 "uuid": "7e50aec7-696d-437d-ba9d-d22d0990acad", 00:11:30.812 "strip_size_kb": 0, 00:11:30.812 "state": "configuring", 00:11:30.812 "raid_level": "raid1", 00:11:30.812 "superblock": true, 00:11:30.812 "num_base_bdevs": 2, 00:11:30.812 "num_base_bdevs_discovered": 1, 00:11:30.812 "num_base_bdevs_operational": 2, 00:11:30.812 "base_bdevs_list": [ 00:11:30.812 { 00:11:30.812 "name": "BaseBdev1", 00:11:30.812 "uuid": "727e3de9-3372-436b-b1d5-64342fb7d83d", 00:11:30.812 "is_configured": true, 00:11:30.812 "data_offset": 2048, 00:11:30.812 "data_size": 63488 00:11:30.812 }, 00:11:30.812 { 00:11:30.812 "name": "BaseBdev2", 00:11:30.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.812 "is_configured": false, 00:11:30.812 
"data_offset": 0, 00:11:30.812 "data_size": 0 00:11:30.812 } 00:11:30.812 ] 00:11:30.812 }' 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.812 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.380 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.380 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.380 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.380 [2024-12-05 19:31:24.539371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.380 [2024-12-05 19:31:24.539583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:31.380 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.380 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.381 [2024-12-05 19:31:24.551406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.381 [2024-12-05 19:31:24.554012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.381 [2024-12-05 19:31:24.554289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.381 "name": "Existed_Raid", 00:11:31.381 "uuid": "bf607f63-bf82-41c6-851c-e1d22bc0e474", 00:11:31.381 "strip_size_kb": 0, 00:11:31.381 "state": "configuring", 00:11:31.381 "raid_level": "raid1", 00:11:31.381 "superblock": true, 00:11:31.381 "num_base_bdevs": 2, 00:11:31.381 "num_base_bdevs_discovered": 1, 00:11:31.381 "num_base_bdevs_operational": 2, 00:11:31.381 "base_bdevs_list": [ 00:11:31.381 { 00:11:31.381 "name": "BaseBdev1", 00:11:31.381 "uuid": "727e3de9-3372-436b-b1d5-64342fb7d83d", 00:11:31.381 "is_configured": true, 00:11:31.381 "data_offset": 2048, 00:11:31.381 "data_size": 63488 00:11:31.381 }, 00:11:31.381 { 00:11:31.381 "name": "BaseBdev2", 00:11:31.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.381 "is_configured": false, 00:11:31.381 "data_offset": 0, 00:11:31.381 "data_size": 0 00:11:31.381 } 00:11:31.381 ] 00:11:31.381 }' 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.381 19:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.639 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.639 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.639 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.899 [2024-12-05 19:31:25.120026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.899 [2024-12-05 19:31:25.120493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:31.899 [2024-12-05 19:31:25.120520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.899 BaseBdev2 00:11:31.899 [2024-12-05 19:31:25.120878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:11:31.899 [2024-12-05 19:31:25.121089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:31.899 [2024-12-05 19:31:25.121113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:31.899 [2024-12-05 19:31:25.121304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.899 [ 00:11:31.899 { 00:11:31.899 "name": "BaseBdev2", 00:11:31.899 "aliases": [ 00:11:31.899 "532e816b-8937-462b-863a-af220b835705" 00:11:31.899 ], 00:11:31.899 "product_name": "Malloc disk", 00:11:31.899 "block_size": 512, 00:11:31.899 "num_blocks": 65536, 00:11:31.899 "uuid": "532e816b-8937-462b-863a-af220b835705", 00:11:31.899 "assigned_rate_limits": { 00:11:31.899 "rw_ios_per_sec": 0, 00:11:31.899 "rw_mbytes_per_sec": 0, 00:11:31.899 "r_mbytes_per_sec": 0, 00:11:31.899 "w_mbytes_per_sec": 0 00:11:31.899 }, 00:11:31.899 "claimed": true, 00:11:31.899 "claim_type": "exclusive_write", 00:11:31.899 "zoned": false, 00:11:31.899 "supported_io_types": { 00:11:31.899 "read": true, 00:11:31.899 "write": true, 00:11:31.899 "unmap": true, 00:11:31.899 "flush": true, 00:11:31.899 "reset": true, 00:11:31.899 "nvme_admin": false, 00:11:31.899 "nvme_io": false, 00:11:31.899 "nvme_io_md": false, 00:11:31.899 "write_zeroes": true, 00:11:31.899 "zcopy": true, 00:11:31.899 "get_zone_info": false, 00:11:31.899 "zone_management": false, 00:11:31.899 "zone_append": false, 00:11:31.899 "compare": false, 00:11:31.899 "compare_and_write": false, 00:11:31.899 "abort": true, 00:11:31.899 "seek_hole": false, 00:11:31.899 "seek_data": false, 00:11:31.899 "copy": true, 00:11:31.899 "nvme_iov_md": false 00:11:31.899 }, 00:11:31.899 "memory_domains": [ 00:11:31.899 { 00:11:31.899 "dma_device_id": "system", 00:11:31.899 "dma_device_type": 1 00:11:31.899 }, 00:11:31.899 { 00:11:31.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.899 "dma_device_type": 2 00:11:31.899 } 00:11:31.899 ], 00:11:31.899 "driver_specific": {} 00:11:31.899 } 00:11:31.899 ] 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.899 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:31.900 "name": "Existed_Raid", 00:11:31.900 "uuid": "bf607f63-bf82-41c6-851c-e1d22bc0e474", 00:11:31.900 "strip_size_kb": 0, 00:11:31.900 "state": "online", 00:11:31.900 "raid_level": "raid1", 00:11:31.900 "superblock": true, 00:11:31.900 "num_base_bdevs": 2, 00:11:31.900 "num_base_bdevs_discovered": 2, 00:11:31.900 "num_base_bdevs_operational": 2, 00:11:31.900 "base_bdevs_list": [ 00:11:31.900 { 00:11:31.900 "name": "BaseBdev1", 00:11:31.900 "uuid": "727e3de9-3372-436b-b1d5-64342fb7d83d", 00:11:31.900 "is_configured": true, 00:11:31.900 "data_offset": 2048, 00:11:31.900 "data_size": 63488 00:11:31.900 }, 00:11:31.900 { 00:11:31.900 "name": "BaseBdev2", 00:11:31.900 "uuid": "532e816b-8937-462b-863a-af220b835705", 00:11:31.900 "is_configured": true, 00:11:31.900 "data_offset": 2048, 00:11:31.900 "data_size": 63488 00:11:31.900 } 00:11:31.900 ] 00:11:31.900 }' 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.900 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.465 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.466 [2024-12-05 19:31:25.692735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.466 "name": "Existed_Raid", 00:11:32.466 "aliases": [ 00:11:32.466 "bf607f63-bf82-41c6-851c-e1d22bc0e474" 00:11:32.466 ], 00:11:32.466 "product_name": "Raid Volume", 00:11:32.466 "block_size": 512, 00:11:32.466 "num_blocks": 63488, 00:11:32.466 "uuid": "bf607f63-bf82-41c6-851c-e1d22bc0e474", 00:11:32.466 "assigned_rate_limits": { 00:11:32.466 "rw_ios_per_sec": 0, 00:11:32.466 "rw_mbytes_per_sec": 0, 00:11:32.466 "r_mbytes_per_sec": 0, 00:11:32.466 "w_mbytes_per_sec": 0 00:11:32.466 }, 00:11:32.466 "claimed": false, 00:11:32.466 "zoned": false, 00:11:32.466 "supported_io_types": { 00:11:32.466 "read": true, 00:11:32.466 "write": true, 00:11:32.466 "unmap": false, 00:11:32.466 "flush": false, 00:11:32.466 "reset": true, 00:11:32.466 "nvme_admin": false, 00:11:32.466 "nvme_io": false, 00:11:32.466 "nvme_io_md": false, 00:11:32.466 "write_zeroes": true, 00:11:32.466 "zcopy": false, 00:11:32.466 "get_zone_info": false, 00:11:32.466 "zone_management": false, 00:11:32.466 "zone_append": false, 00:11:32.466 "compare": false, 00:11:32.466 "compare_and_write": false, 00:11:32.466 "abort": false, 00:11:32.466 "seek_hole": false, 00:11:32.466 "seek_data": false, 00:11:32.466 "copy": false, 00:11:32.466 "nvme_iov_md": false 00:11:32.466 }, 00:11:32.466 "memory_domains": [ 00:11:32.466 { 00:11:32.466 "dma_device_id": "system", 00:11:32.466 "dma_device_type": 1 00:11:32.466 }, 
00:11:32.466 { 00:11:32.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.466 "dma_device_type": 2 00:11:32.466 }, 00:11:32.466 { 00:11:32.466 "dma_device_id": "system", 00:11:32.466 "dma_device_type": 1 00:11:32.466 }, 00:11:32.466 { 00:11:32.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.466 "dma_device_type": 2 00:11:32.466 } 00:11:32.466 ], 00:11:32.466 "driver_specific": { 00:11:32.466 "raid": { 00:11:32.466 "uuid": "bf607f63-bf82-41c6-851c-e1d22bc0e474", 00:11:32.466 "strip_size_kb": 0, 00:11:32.466 "state": "online", 00:11:32.466 "raid_level": "raid1", 00:11:32.466 "superblock": true, 00:11:32.466 "num_base_bdevs": 2, 00:11:32.466 "num_base_bdevs_discovered": 2, 00:11:32.466 "num_base_bdevs_operational": 2, 00:11:32.466 "base_bdevs_list": [ 00:11:32.466 { 00:11:32.466 "name": "BaseBdev1", 00:11:32.466 "uuid": "727e3de9-3372-436b-b1d5-64342fb7d83d", 00:11:32.466 "is_configured": true, 00:11:32.466 "data_offset": 2048, 00:11:32.466 "data_size": 63488 00:11:32.466 }, 00:11:32.466 { 00:11:32.466 "name": "BaseBdev2", 00:11:32.466 "uuid": "532e816b-8937-462b-863a-af220b835705", 00:11:32.466 "is_configured": true, 00:11:32.466 "data_offset": 2048, 00:11:32.466 "data_size": 63488 00:11:32.466 } 00:11:32.466 ] 00:11:32.466 } 00:11:32.466 } 00:11:32.466 }' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:32.466 BaseBdev2' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.466 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.725 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.725 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.725 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.725 19:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.725 19:31:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.725 19:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.725 [2024-12-05 19:31:25.960545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.725 
19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.725 "name": "Existed_Raid", 00:11:32.725 "uuid": "bf607f63-bf82-41c6-851c-e1d22bc0e474", 00:11:32.725 "strip_size_kb": 0, 00:11:32.725 "state": "online", 00:11:32.725 "raid_level": "raid1", 00:11:32.725 "superblock": true, 00:11:32.725 "num_base_bdevs": 2, 00:11:32.725 "num_base_bdevs_discovered": 1, 00:11:32.725 "num_base_bdevs_operational": 1, 00:11:32.725 "base_bdevs_list": [ 00:11:32.725 { 00:11:32.725 "name": null, 00:11:32.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.725 "is_configured": false, 00:11:32.725 "data_offset": 0, 00:11:32.725 "data_size": 63488 00:11:32.725 }, 00:11:32.725 { 00:11:32.725 "name": "BaseBdev2", 00:11:32.725 "uuid": "532e816b-8937-462b-863a-af220b835705", 00:11:32.725 "is_configured": true, 00:11:32.725 "data_offset": 2048, 00:11:32.725 "data_size": 63488 00:11:32.725 } 00:11:32.725 ] 00:11:32.725 }' 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.725 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:33.292 19:31:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 [2024-12-05 19:31:26.628583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:33.292 [2024-12-05 19:31:26.628775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.292 [2024-12-05 19:31:26.715094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.292 [2024-12-05 19:31:26.715166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.292 [2024-12-05 19:31:26.715186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.551 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:33.551 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:33.551 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:33.551 19:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62919 00:11:33.551 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62919 ']' 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62919 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62919 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.552 19:31:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.552 killing process with pid 62919 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62919' 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62919 00:11:33.552 [2024-12-05 19:31:26.808451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.552 19:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62919 00:11:33.552 [2024-12-05 19:31:26.823467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.489 19:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.489 00:11:34.489 real 0m5.521s 00:11:34.489 user 0m8.365s 00:11:34.489 sys 0m0.755s 00:11:34.489 19:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.489 19:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.489 ************************************ 00:11:34.490 END TEST raid_state_function_test_sb 00:11:34.490 ************************************ 00:11:34.490 19:31:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:34.490 19:31:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.490 19:31:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.490 19:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.490 ************************************ 00:11:34.490 START TEST raid_superblock_test 00:11:34.490 ************************************ 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63171 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63171 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63171 ']' 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.748 19:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.748 [2024-12-05 19:31:28.040207] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:11:34.748 [2024-12-05 19:31:28.040397] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ] 00:11:35.008 [2024-12-05 19:31:28.223143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.008 [2024-12-05 19:31:28.355910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.268 [2024-12-05 19:31:28.557467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.268 [2024-12-05 19:31:28.557546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.840 19:31:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.840 malloc1 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.840 [2024-12-05 19:31:29.056763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.840 [2024-12-05 19:31:29.056874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.840 [2024-12-05 19:31:29.056906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.840 [2024-12-05 19:31:29.056920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.840 
[2024-12-05 19:31:29.059819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.840 [2024-12-05 19:31:29.059865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.840 pt1 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.840 malloc2 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.840 19:31:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.840 [2024-12-05 19:31:29.105652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.840 [2024-12-05 19:31:29.105775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.840 [2024-12-05 19:31:29.105814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:35.840 [2024-12-05 19:31:29.105829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.840 [2024-12-05 19:31:29.108603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.840 [2024-12-05 19:31:29.108660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.840 pt2 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.840 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.841 [2024-12-05 19:31:29.113661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.841 [2024-12-05 19:31:29.116145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.841 [2024-12-05 19:31:29.116434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:35.841 [2024-12-05 19:31:29.116457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.841 [2024-12-05 
19:31:29.116774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:35.841 [2024-12-05 19:31:29.116978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:35.841 [2024-12-05 19:31:29.117014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:35.841 [2024-12-05 19:31:29.117190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.841 "name": "raid_bdev1", 00:11:35.841 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:35.841 "strip_size_kb": 0, 00:11:35.841 "state": "online", 00:11:35.841 "raid_level": "raid1", 00:11:35.841 "superblock": true, 00:11:35.841 "num_base_bdevs": 2, 00:11:35.841 "num_base_bdevs_discovered": 2, 00:11:35.841 "num_base_bdevs_operational": 2, 00:11:35.841 "base_bdevs_list": [ 00:11:35.841 { 00:11:35.841 "name": "pt1", 00:11:35.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.841 "is_configured": true, 00:11:35.841 "data_offset": 2048, 00:11:35.841 "data_size": 63488 00:11:35.841 }, 00:11:35.841 { 00:11:35.841 "name": "pt2", 00:11:35.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.841 "is_configured": true, 00:11:35.841 "data_offset": 2048, 00:11:35.841 "data_size": 63488 00:11:35.841 } 00:11:35.841 ] 00:11:35.841 }' 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.841 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.411 19:31:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.411 [2024-12-05 19:31:29.662239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.411 "name": "raid_bdev1", 00:11:36.411 "aliases": [ 00:11:36.411 "6999c056-f846-4e55-a259-06ae83e9c353" 00:11:36.411 ], 00:11:36.411 "product_name": "Raid Volume", 00:11:36.411 "block_size": 512, 00:11:36.411 "num_blocks": 63488, 00:11:36.411 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:36.411 "assigned_rate_limits": { 00:11:36.411 "rw_ios_per_sec": 0, 00:11:36.411 "rw_mbytes_per_sec": 0, 00:11:36.411 "r_mbytes_per_sec": 0, 00:11:36.411 "w_mbytes_per_sec": 0 00:11:36.411 }, 00:11:36.411 "claimed": false, 00:11:36.411 "zoned": false, 00:11:36.411 "supported_io_types": { 00:11:36.411 "read": true, 00:11:36.411 "write": true, 00:11:36.411 "unmap": false, 00:11:36.411 "flush": false, 00:11:36.411 "reset": true, 00:11:36.411 "nvme_admin": false, 00:11:36.411 "nvme_io": false, 00:11:36.411 "nvme_io_md": false, 00:11:36.411 "write_zeroes": true, 00:11:36.411 "zcopy": false, 00:11:36.411 "get_zone_info": false, 00:11:36.411 "zone_management": false, 00:11:36.411 "zone_append": false, 00:11:36.411 "compare": false, 00:11:36.411 "compare_and_write": false, 00:11:36.411 "abort": false, 00:11:36.411 "seek_hole": false, 00:11:36.411 
"seek_data": false, 00:11:36.411 "copy": false, 00:11:36.411 "nvme_iov_md": false 00:11:36.411 }, 00:11:36.411 "memory_domains": [ 00:11:36.411 { 00:11:36.411 "dma_device_id": "system", 00:11:36.411 "dma_device_type": 1 00:11:36.411 }, 00:11:36.411 { 00:11:36.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.411 "dma_device_type": 2 00:11:36.411 }, 00:11:36.411 { 00:11:36.411 "dma_device_id": "system", 00:11:36.411 "dma_device_type": 1 00:11:36.411 }, 00:11:36.411 { 00:11:36.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.411 "dma_device_type": 2 00:11:36.411 } 00:11:36.411 ], 00:11:36.411 "driver_specific": { 00:11:36.411 "raid": { 00:11:36.411 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:36.411 "strip_size_kb": 0, 00:11:36.411 "state": "online", 00:11:36.411 "raid_level": "raid1", 00:11:36.411 "superblock": true, 00:11:36.411 "num_base_bdevs": 2, 00:11:36.411 "num_base_bdevs_discovered": 2, 00:11:36.411 "num_base_bdevs_operational": 2, 00:11:36.411 "base_bdevs_list": [ 00:11:36.411 { 00:11:36.411 "name": "pt1", 00:11:36.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.411 "is_configured": true, 00:11:36.411 "data_offset": 2048, 00:11:36.411 "data_size": 63488 00:11:36.411 }, 00:11:36.411 { 00:11:36.411 "name": "pt2", 00:11:36.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.411 "is_configured": true, 00:11:36.411 "data_offset": 2048, 00:11:36.411 "data_size": 63488 00:11:36.411 } 00:11:36.411 ] 00:11:36.411 } 00:11:36.411 } 00:11:36.411 }' 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.411 pt2' 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.411 19:31:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.411 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 [2024-12-05 19:31:29.938279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6999c056-f846-4e55-a259-06ae83e9c353 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6999c056-f846-4e55-a259-06ae83e9c353 ']' 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 [2024-12-05 19:31:29.981948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.670 [2024-12-05 19:31:29.981993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.670 [2024-12-05 19:31:29.982106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.670 [2024-12-05 19:31:29.982205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.670 [2024-12-05 19:31:29.982227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:36.670 19:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:36.670 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:36.928 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.928 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:36.928 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.928 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:36.928 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.928 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.929 [2024-12-05 19:31:30.113992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:36.929 [2024-12-05 19:31:30.116481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:36.929 [2024-12-05 19:31:30.116717] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:11:36.929 [2024-12-05 19:31:30.116800] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:36.929 [2024-12-05 19:31:30.116827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.929 [2024-12-05 19:31:30.116842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:36.929 request: 00:11:36.929 { 00:11:36.929 "name": "raid_bdev1", 00:11:36.929 "raid_level": "raid1", 00:11:36.929 "base_bdevs": [ 00:11:36.929 "malloc1", 00:11:36.929 "malloc2" 00:11:36.929 ], 00:11:36.929 "superblock": false, 00:11:36.929 "method": "bdev_raid_create", 00:11:36.929 "req_id": 1 00:11:36.929 } 00:11:36.929 Got JSON-RPC error response 00:11:36.929 response: 00:11:36.929 { 00:11:36.929 "code": -17, 00:11:36.929 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:36.929 } 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.929 [2024-12-05 19:31:30.177997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.929 [2024-12-05 19:31:30.178185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.929 [2024-12-05 19:31:30.178258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:36.929 [2024-12-05 19:31:30.178486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.929 [2024-12-05 19:31:30.181446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.929 [2024-12-05 19:31:30.181620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.929 [2024-12-05 19:31:30.181841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.929 [2024-12-05 19:31:30.182043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.929 pt1 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.929 19:31:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.929 "name": "raid_bdev1", 00:11:36.929 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:36.929 "strip_size_kb": 0, 00:11:36.929 "state": "configuring", 00:11:36.929 "raid_level": "raid1", 00:11:36.929 "superblock": true, 00:11:36.929 "num_base_bdevs": 2, 00:11:36.929 "num_base_bdevs_discovered": 1, 00:11:36.929 "num_base_bdevs_operational": 2, 00:11:36.929 "base_bdevs_list": [ 00:11:36.929 { 00:11:36.929 "name": "pt1", 00:11:36.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.929 
"is_configured": true, 00:11:36.929 "data_offset": 2048, 00:11:36.929 "data_size": 63488 00:11:36.929 }, 00:11:36.929 { 00:11:36.929 "name": null, 00:11:36.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.929 "is_configured": false, 00:11:36.929 "data_offset": 2048, 00:11:36.929 "data_size": 63488 00:11:36.929 } 00:11:36.929 ] 00:11:36.929 }' 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.929 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.497 [2024-12-05 19:31:30.722583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.497 [2024-12-05 19:31:30.722820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.497 [2024-12-05 19:31:30.723016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:37.497 [2024-12-05 19:31:30.723142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.497 [2024-12-05 19:31:30.723788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.497 [2024-12-05 19:31:30.723833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.497 [2024-12-05 19:31:30.723935] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.497 [2024-12-05 19:31:30.723975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.497 [2024-12-05 19:31:30.724120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:37.497 [2024-12-05 19:31:30.724157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.497 [2024-12-05 19:31:30.724464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:37.497 pt2 00:11:37.497 [2024-12-05 19:31:30.724768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:37.497 [2024-12-05 19:31:30.724790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:37.497 [2024-12-05 19:31:30.724963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.497 
19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.497 "name": "raid_bdev1", 00:11:37.497 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:37.497 "strip_size_kb": 0, 00:11:37.497 "state": "online", 00:11:37.497 "raid_level": "raid1", 00:11:37.497 "superblock": true, 00:11:37.497 "num_base_bdevs": 2, 00:11:37.497 "num_base_bdevs_discovered": 2, 00:11:37.497 "num_base_bdevs_operational": 2, 00:11:37.497 "base_bdevs_list": [ 00:11:37.497 { 00:11:37.497 "name": "pt1", 00:11:37.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.497 "is_configured": true, 00:11:37.497 "data_offset": 2048, 00:11:37.497 "data_size": 63488 00:11:37.497 }, 00:11:37.497 { 00:11:37.497 "name": "pt2", 00:11:37.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.497 "is_configured": true, 00:11:37.497 "data_offset": 2048, 00:11:37.497 "data_size": 63488 00:11:37.497 } 00:11:37.497 ] 00:11:37.497 }' 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:37.497 19:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 [2024-12-05 19:31:31.247028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.064 "name": "raid_bdev1", 00:11:38.064 "aliases": [ 00:11:38.064 "6999c056-f846-4e55-a259-06ae83e9c353" 00:11:38.064 ], 00:11:38.064 "product_name": "Raid Volume", 00:11:38.064 "block_size": 512, 00:11:38.064 "num_blocks": 63488, 00:11:38.064 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:38.064 "assigned_rate_limits": { 00:11:38.064 "rw_ios_per_sec": 0, 00:11:38.064 "rw_mbytes_per_sec": 0, 00:11:38.064 "r_mbytes_per_sec": 0, 00:11:38.064 "w_mbytes_per_sec": 0 
00:11:38.064 }, 00:11:38.064 "claimed": false, 00:11:38.064 "zoned": false, 00:11:38.064 "supported_io_types": { 00:11:38.064 "read": true, 00:11:38.064 "write": true, 00:11:38.064 "unmap": false, 00:11:38.064 "flush": false, 00:11:38.064 "reset": true, 00:11:38.064 "nvme_admin": false, 00:11:38.064 "nvme_io": false, 00:11:38.064 "nvme_io_md": false, 00:11:38.064 "write_zeroes": true, 00:11:38.064 "zcopy": false, 00:11:38.064 "get_zone_info": false, 00:11:38.064 "zone_management": false, 00:11:38.064 "zone_append": false, 00:11:38.064 "compare": false, 00:11:38.064 "compare_and_write": false, 00:11:38.064 "abort": false, 00:11:38.064 "seek_hole": false, 00:11:38.064 "seek_data": false, 00:11:38.064 "copy": false, 00:11:38.064 "nvme_iov_md": false 00:11:38.064 }, 00:11:38.064 "memory_domains": [ 00:11:38.064 { 00:11:38.064 "dma_device_id": "system", 00:11:38.064 "dma_device_type": 1 00:11:38.064 }, 00:11:38.064 { 00:11:38.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.064 "dma_device_type": 2 00:11:38.064 }, 00:11:38.064 { 00:11:38.064 "dma_device_id": "system", 00:11:38.064 "dma_device_type": 1 00:11:38.064 }, 00:11:38.064 { 00:11:38.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.064 "dma_device_type": 2 00:11:38.064 } 00:11:38.064 ], 00:11:38.064 "driver_specific": { 00:11:38.064 "raid": { 00:11:38.064 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:38.064 "strip_size_kb": 0, 00:11:38.064 "state": "online", 00:11:38.064 "raid_level": "raid1", 00:11:38.064 "superblock": true, 00:11:38.064 "num_base_bdevs": 2, 00:11:38.064 "num_base_bdevs_discovered": 2, 00:11:38.064 "num_base_bdevs_operational": 2, 00:11:38.064 "base_bdevs_list": [ 00:11:38.064 { 00:11:38.064 "name": "pt1", 00:11:38.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.064 "is_configured": true, 00:11:38.064 "data_offset": 2048, 00:11:38.064 "data_size": 63488 00:11:38.064 }, 00:11:38.064 { 00:11:38.064 "name": "pt2", 00:11:38.064 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:38.064 "is_configured": true, 00:11:38.064 "data_offset": 2048, 00:11:38.064 "data_size": 63488 00:11:38.064 } 00:11:38.064 ] 00:11:38.064 } 00:11:38.064 } 00:11:38.064 }' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.064 pt2' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.064 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.322 [2024-12-05 19:31:31.511088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6999c056-f846-4e55-a259-06ae83e9c353 '!=' 6999c056-f846-4e55-a259-06ae83e9c353 ']' 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.322 [2024-12-05 19:31:31.562848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:38.322 "name": "raid_bdev1", 00:11:38.322 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:38.322 "strip_size_kb": 0, 00:11:38.322 "state": "online", 00:11:38.322 "raid_level": "raid1", 00:11:38.322 "superblock": true, 00:11:38.322 "num_base_bdevs": 2, 00:11:38.322 "num_base_bdevs_discovered": 1, 00:11:38.322 "num_base_bdevs_operational": 1, 00:11:38.322 "base_bdevs_list": [ 00:11:38.322 { 00:11:38.322 "name": null, 00:11:38.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.322 "is_configured": false, 00:11:38.322 "data_offset": 0, 00:11:38.322 "data_size": 63488 00:11:38.322 }, 00:11:38.322 { 00:11:38.322 "name": "pt2", 00:11:38.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.322 "is_configured": true, 00:11:38.322 "data_offset": 2048, 00:11:38.322 "data_size": 63488 00:11:38.322 } 00:11:38.322 ] 00:11:38.322 }' 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.322 19:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 [2024-12-05 19:31:32.059020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.694 [2024-12-05 19:31:32.059448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.694 [2024-12-05 19:31:32.059591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.694 [2024-12-05 19:31:32.059684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.694 [2024-12-05 19:31:32.059705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.694 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.952 [2024-12-05 19:31:32.126981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.952 [2024-12-05 19:31:32.127196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.952 [2024-12-05 19:31:32.127264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:38.952 [2024-12-05 19:31:32.127392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.952 [2024-12-05 19:31:32.130346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.952 [2024-12-05 19:31:32.130524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.952 [2024-12-05 19:31:32.130799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:38.952 [2024-12-05 19:31:32.130972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.952 [2024-12-05 19:31:32.131143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:38.952 [2024-12-05 19:31:32.131255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.952 [2024-12-05 19:31:32.131583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.952 [2024-12-05 19:31:32.131921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:38.952 pt2 00:11:38.952 [2024-12-05 19:31:32.132040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:11:38.952 [2024-12-05 19:31:32.132267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:38.952 "name": "raid_bdev1", 00:11:38.952 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:38.952 "strip_size_kb": 0, 00:11:38.952 "state": "online", 00:11:38.952 "raid_level": "raid1", 00:11:38.952 "superblock": true, 00:11:38.952 "num_base_bdevs": 2, 00:11:38.952 "num_base_bdevs_discovered": 1, 00:11:38.952 "num_base_bdevs_operational": 1, 00:11:38.952 "base_bdevs_list": [ 00:11:38.952 { 00:11:38.952 "name": null, 00:11:38.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.952 "is_configured": false, 00:11:38.952 "data_offset": 2048, 00:11:38.952 "data_size": 63488 00:11:38.952 }, 00:11:38.952 { 00:11:38.952 "name": "pt2", 00:11:38.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.952 "is_configured": true, 00:11:38.952 "data_offset": 2048, 00:11:38.952 "data_size": 63488 00:11:38.952 } 00:11:38.952 ] 00:11:38.952 }' 00:11:38.952 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.953 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.210 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.210 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 [2024-12-05 19:31:32.643467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.210 [2024-12-05 19:31:32.643656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.210 [2024-12-05 19:31:32.643786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.210 [2024-12-05 19:31:32.643857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.210 [2024-12-05 19:31:32.643873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:39.210 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.467 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.467 [2024-12-05 19:31:32.707477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.467 [2024-12-05 19:31:32.707672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.467 [2024-12-05 19:31:32.707766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:39.467 [2024-12-05 19:31:32.708025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.467 [2024-12-05 19:31:32.711259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.467 [2024-12-05 19:31:32.711441] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.467 [2024-12-05 19:31:32.711664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:39.467 [2024-12-05 19:31:32.711873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.468 [2024-12-05 19:31:32.712241] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater pt1 00:11:39.468 than existing raid bdev raid_bdev1 (2) 00:11:39.468 [2024-12-05 19:31:32.712399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.468 [2024-12-05 19:31:32.712436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:39.468 [2024-12-05 19:31:32.712506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.468 [2024-12-05 19:31:32.712615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:39.468 [2024-12-05 19:31:32.712630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.468 [2024-12-05 19:31:32.712959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.468 [2024-12-05 19:31:32.713176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:39.468 [2024-12-05 19:31:32.713198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:39.468 [2024-12-05 19:31:32.713385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.468 "name": "raid_bdev1", 00:11:39.468 "uuid": "6999c056-f846-4e55-a259-06ae83e9c353", 00:11:39.468 "strip_size_kb": 0, 00:11:39.468 "state": "online", 00:11:39.468 "raid_level": "raid1", 00:11:39.468 "superblock": true, 00:11:39.468 "num_base_bdevs": 2, 00:11:39.468 "num_base_bdevs_discovered": 1, 00:11:39.468 "num_base_bdevs_operational": 
1, 00:11:39.468 "base_bdevs_list": [ 00:11:39.468 { 00:11:39.468 "name": null, 00:11:39.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.468 "is_configured": false, 00:11:39.468 "data_offset": 2048, 00:11:39.468 "data_size": 63488 00:11:39.468 }, 00:11:39.468 { 00:11:39.468 "name": "pt2", 00:11:39.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.468 "is_configured": true, 00:11:39.468 "data_offset": 2048, 00:11:39.468 "data_size": 63488 00:11:39.468 } 00:11:39.468 ] 00:11:39.468 }' 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.468 19:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:40.034 [2024-12-05 19:31:33.300151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6999c056-f846-4e55-a259-06ae83e9c353 '!=' 6999c056-f846-4e55-a259-06ae83e9c353 ']' 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63171 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63171 ']' 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63171 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63171 00:11:40.034 killing process with pid 63171 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63171' 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63171 00:11:40.034 [2024-12-05 19:31:33.386011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.034 19:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63171 00:11:40.034 [2024-12-05 19:31:33.386112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.034 [2024-12-05 19:31:33.386184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.034 [2024-12-05 19:31:33.386206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:11:40.293 [2024-12-05 19:31:33.571200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.227 ************************************ 00:11:41.227 END TEST raid_superblock_test 00:11:41.227 ************************************ 00:11:41.227 19:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:41.227 00:11:41.227 real 0m6.691s 00:11:41.227 user 0m10.609s 00:11:41.227 sys 0m0.965s 00:11:41.227 19:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.227 19:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.227 19:31:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:41.227 19:31:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:41.227 19:31:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.227 19:31:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.485 ************************************ 00:11:41.485 START TEST raid_read_error_test 00:11:41.485 ************************************ 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:41.485 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AKzJaqAdwy 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63508 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63508 00:11:41.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63508 ']' 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.486 19:31:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.486 [2024-12-05 19:31:34.795684] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:41.486 [2024-12-05 19:31:34.795947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63508 ] 00:11:41.744 [2024-12-05 19:31:34.983920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.744 [2024-12-05 19:31:35.139018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.002 [2024-12-05 19:31:35.348863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.002 [2024-12-05 19:31:35.348946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.566 BaseBdev1_malloc 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.566 true 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.566 [2024-12-05 19:31:35.822608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:42.566 [2024-12-05 19:31:35.822852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.566 [2024-12-05 19:31:35.822895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:42.566 [2024-12-05 19:31:35.822915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.566 [2024-12-05 19:31:35.825822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.566 [2024-12-05 19:31:35.825877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.566 BaseBdev1 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.566 BaseBdev2_malloc 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.566 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.567 true 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.567 [2024-12-05 19:31:35.880040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:42.567 [2024-12-05 19:31:35.880240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.567 [2024-12-05 19:31:35.880312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:42.567 [2024-12-05 19:31:35.880513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.567 [2024-12-05 19:31:35.883320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.567 [2024-12-05 19:31:35.883373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.567 BaseBdev2 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.567 [2024-12-05 19:31:35.888182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.567 
[2024-12-05 19:31:35.890644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.567 [2024-12-05 19:31:35.891075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.567 [2024-12-05 19:31:35.891107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.567 [2024-12-05 19:31:35.891411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:42.567 [2024-12-05 19:31:35.891691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.567 [2024-12-05 19:31:35.891709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:42.567 [2024-12-05 19:31:35.891926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.567 "name": "raid_bdev1", 00:11:42.567 "uuid": "bb71145f-2270-41a7-a1d9-331746c53b98", 00:11:42.567 "strip_size_kb": 0, 00:11:42.567 "state": "online", 00:11:42.567 "raid_level": "raid1", 00:11:42.567 "superblock": true, 00:11:42.567 "num_base_bdevs": 2, 00:11:42.567 "num_base_bdevs_discovered": 2, 00:11:42.567 "num_base_bdevs_operational": 2, 00:11:42.567 "base_bdevs_list": [ 00:11:42.567 { 00:11:42.567 "name": "BaseBdev1", 00:11:42.567 "uuid": "c72a2131-b170-5d07-b47c-9b3570869bf0", 00:11:42.567 "is_configured": true, 00:11:42.567 "data_offset": 2048, 00:11:42.567 "data_size": 63488 00:11:42.567 }, 00:11:42.567 { 00:11:42.567 "name": "BaseBdev2", 00:11:42.567 "uuid": "d85f3e34-3d66-53c1-ad71-232810947207", 00:11:42.567 "is_configured": true, 00:11:42.567 "data_offset": 2048, 00:11:42.567 "data_size": 63488 00:11:42.567 } 00:11:42.567 ] 00:11:42.567 }' 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.567 19:31:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.132 19:31:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:43.132 19:31:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:43.132 [2024-12-05 19:31:36.533735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.067 "name": "raid_bdev1", 00:11:44.067 "uuid": "bb71145f-2270-41a7-a1d9-331746c53b98", 00:11:44.067 "strip_size_kb": 0, 00:11:44.067 "state": "online", 00:11:44.067 "raid_level": "raid1", 00:11:44.067 "superblock": true, 00:11:44.067 "num_base_bdevs": 2, 00:11:44.067 "num_base_bdevs_discovered": 2, 00:11:44.067 "num_base_bdevs_operational": 2, 00:11:44.067 "base_bdevs_list": [ 00:11:44.067 { 00:11:44.067 "name": "BaseBdev1", 00:11:44.067 "uuid": "c72a2131-b170-5d07-b47c-9b3570869bf0", 00:11:44.067 "is_configured": true, 00:11:44.067 "data_offset": 2048, 00:11:44.067 "data_size": 63488 00:11:44.067 }, 00:11:44.067 { 00:11:44.067 "name": "BaseBdev2", 00:11:44.067 "uuid": "d85f3e34-3d66-53c1-ad71-232810947207", 00:11:44.067 "is_configured": true, 00:11:44.067 "data_offset": 2048, 00:11:44.067 "data_size": 63488 00:11:44.067 } 00:11:44.067 ] 00:11:44.067 }' 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.067 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 19:31:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.637 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 [2024-12-05 19:31:37.931475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.637 [2024-12-05 19:31:37.931736] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.637 [2024-12-05 19:31:37.935347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.637 [2024-12-05 19:31:37.935763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.637 { 00:11:44.637 "results": [ 00:11:44.637 { 00:11:44.637 "job": "raid_bdev1", 00:11:44.637 "core_mask": "0x1", 00:11:44.637 "workload": "randrw", 00:11:44.637 "percentage": 50, 00:11:44.637 "status": "finished", 00:11:44.637 "queue_depth": 1, 00:11:44.637 "io_size": 131072, 00:11:44.637 "runtime": 1.395878, 00:11:44.637 "iops": 12148.626169335716, 00:11:44.637 "mibps": 1518.5782711669644, 00:11:44.637 "io_failed": 0, 00:11:44.637 "io_timeout": 0, 00:11:44.637 "avg_latency_us": 77.97541133710023, 00:11:44.637 "min_latency_us": 43.054545454545455, 00:11:44.637 "max_latency_us": 1832.0290909090909 00:11:44.637 } 00:11:44.637 ], 00:11:44.637 "core_count": 1 00:11:44.637 } 00:11:44.637 [2024-12-05 19:31:37.936010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.637 [2024-12-05 19:31:37.936058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:44.637 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 19:31:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63508 00:11:44.637 
19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63508 ']' 00:11:44.637 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63508 00:11:44.637 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63508 00:11:44.638 killing process with pid 63508 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63508' 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63508 00:11:44.638 [2024-12-05 19:31:37.973794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.638 19:31:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63508 00:11:44.897 [2024-12-05 19:31:38.101481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AKzJaqAdwy 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.882 19:31:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:45.882 00:11:45.882 real 0m4.571s 00:11:45.882 user 0m5.722s 00:11:45.882 sys 0m0.531s 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.882 ************************************ 00:11:45.882 END TEST raid_read_error_test 00:11:45.882 ************************************ 00:11:45.882 19:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.882 19:31:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:45.882 19:31:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.882 19:31:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.882 19:31:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.882 ************************************ 00:11:45.882 START TEST raid_write_error_test 00:11:45.882 ************************************ 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.882 
19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.882 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pKdHxgQpqo 00:11:45.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63653 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63653 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63653 ']' 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.883 19:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.141 [2024-12-05 19:31:39.403025] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:11:46.141 [2024-12-05 19:31:39.403460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63653 ] 00:11:46.141 [2024-12-05 19:31:39.577527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.401 [2024-12-05 19:31:39.706280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.660 [2024-12-05 19:31:39.913331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.660 [2024-12-05 19:31:39.913432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 BaseBdev1_malloc 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 true 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 [2024-12-05 19:31:40.473764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:47.230 [2024-12-05 19:31:40.473987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.230 [2024-12-05 19:31:40.474140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:47.230 [2024-12-05 19:31:40.474273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.230 [2024-12-05 19:31:40.477206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.230 BaseBdev1 00:11:47.230 [2024-12-05 19:31:40.477405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 BaseBdev2_malloc 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:47.230 19:31:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 true 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 [2024-12-05 19:31:40.530415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:47.230 [2024-12-05 19:31:40.530624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.230 [2024-12-05 19:31:40.530773] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:47.230 [2024-12-05 19:31:40.530893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.230 [2024-12-05 19:31:40.533801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.230 [2024-12-05 19:31:40.533852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:47.230 BaseBdev2 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.230 [2024-12-05 19:31:40.538497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:47.230 [2024-12-05 19:31:40.541185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.230 [2024-12-05 19:31:40.541587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.230 [2024-12-05 19:31:40.541619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.230 [2024-12-05 19:31:40.541950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:47.230 [2024-12-05 19:31:40.542196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.230 [2024-12-05 19:31:40.542214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.230 [2024-12-05 19:31:40.542520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:47.230 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.231 "name": "raid_bdev1", 00:11:47.231 "uuid": "2f11bdfd-3362-4883-b8be-0af6fc03869b", 00:11:47.231 "strip_size_kb": 0, 00:11:47.231 "state": "online", 00:11:47.231 "raid_level": "raid1", 00:11:47.231 "superblock": true, 00:11:47.231 "num_base_bdevs": 2, 00:11:47.231 "num_base_bdevs_discovered": 2, 00:11:47.231 "num_base_bdevs_operational": 2, 00:11:47.231 "base_bdevs_list": [ 00:11:47.231 { 00:11:47.231 "name": "BaseBdev1", 00:11:47.231 "uuid": "2e422edc-d5cf-50da-b630-f2fd90c46942", 00:11:47.231 "is_configured": true, 00:11:47.231 "data_offset": 2048, 00:11:47.231 "data_size": 63488 00:11:47.231 }, 00:11:47.231 { 00:11:47.231 "name": "BaseBdev2", 00:11:47.231 "uuid": "75be4c71-3927-53ef-b164-8734ed39dc27", 00:11:47.231 "is_configured": true, 00:11:47.231 "data_offset": 2048, 00:11:47.231 "data_size": 63488 00:11:47.231 } 00:11:47.231 ] 00:11:47.231 }' 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.231 19:31:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.799 19:31:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.799 19:31:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.799 [2024-12-05 19:31:41.144241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.736 [2024-12-05 19:31:42.028808] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:48.736 [2024-12-05 19:31:42.028879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.736 [2024-12-05 19:31:42.029123] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.736 "name": "raid_bdev1", 00:11:48.736 "uuid": "2f11bdfd-3362-4883-b8be-0af6fc03869b", 00:11:48.736 "strip_size_kb": 0, 00:11:48.736 "state": "online", 00:11:48.736 "raid_level": "raid1", 00:11:48.736 "superblock": true, 00:11:48.736 "num_base_bdevs": 2, 00:11:48.736 "num_base_bdevs_discovered": 1, 00:11:48.736 "num_base_bdevs_operational": 1, 00:11:48.736 "base_bdevs_list": [ 00:11:48.736 { 00:11:48.736 "name": null, 00:11:48.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.736 "is_configured": false, 00:11:48.736 "data_offset": 0, 00:11:48.736 "data_size": 63488 00:11:48.736 }, 00:11:48.736 { 00:11:48.736 "name": 
"BaseBdev2", 00:11:48.736 "uuid": "75be4c71-3927-53ef-b164-8734ed39dc27", 00:11:48.736 "is_configured": true, 00:11:48.736 "data_offset": 2048, 00:11:48.736 "data_size": 63488 00:11:48.736 } 00:11:48.736 ] 00:11:48.736 }' 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.736 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.303 [2024-12-05 19:31:42.584111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.303 [2024-12-05 19:31:42.584304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.303 [2024-12-05 19:31:42.587657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.303 { 00:11:49.303 "results": [ 00:11:49.303 { 00:11:49.303 "job": "raid_bdev1", 00:11:49.303 "core_mask": "0x1", 00:11:49.303 "workload": "randrw", 00:11:49.303 "percentage": 50, 00:11:49.303 "status": "finished", 00:11:49.303 "queue_depth": 1, 00:11:49.303 "io_size": 131072, 00:11:49.303 "runtime": 1.437562, 00:11:49.303 "iops": 14444.594389668064, 00:11:49.303 "mibps": 1805.574298708508, 00:11:49.303 "io_failed": 0, 00:11:49.303 "io_timeout": 0, 00:11:49.303 "avg_latency_us": 64.66895781800669, 00:11:49.303 "min_latency_us": 39.33090909090909, 00:11:49.303 "max_latency_us": 1884.16 00:11:49.303 } 00:11:49.303 ], 00:11:49.303 "core_count": 1 00:11:49.303 } 00:11:49.303 [2024-12-05 19:31:42.587873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.303 [2024-12-05 19:31:42.587976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:11:49.303 [2024-12-05 19:31:42.587997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63653 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63653 ']' 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63653 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63653 00:11:49.303 killing process with pid 63653 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63653' 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63653 00:11:49.303 [2024-12-05 19:31:42.625115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.303 19:31:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63653 00:11:49.562 [2024-12-05 19:31:42.746018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pKdHxgQpqo 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep 
raid_bdev1 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.498 00:11:50.498 real 0m4.577s 00:11:50.498 user 0m5.730s 00:11:50.498 sys 0m0.569s 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.498 ************************************ 00:11:50.498 END TEST raid_write_error_test 00:11:50.498 ************************************ 00:11:50.498 19:31:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.498 19:31:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:50.498 19:31:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:50.498 19:31:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:50.498 19:31:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.498 19:31:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.498 19:31:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.498 ************************************ 00:11:50.498 START TEST raid_state_function_test 00:11:50.498 ************************************ 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.498 19:31:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63797 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63797' 00:11:50.498 Process raid pid: 63797 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63797 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63797 ']' 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.498 19:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.757 [2024-12-05 19:31:44.040009] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:11:50.757 [2024-12-05 19:31:44.040993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.016 [2024-12-05 19:31:44.233375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.016 [2024-12-05 19:31:44.366478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.275 [2024-12-05 19:31:44.576463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.275 [2024-12-05 19:31:44.576718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 [2024-12-05 19:31:45.057034] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.842 [2024-12-05 19:31:45.057107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.842 [2024-12-05 19:31:45.057125] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.842 [2024-12-05 19:31:45.057143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.842 [2024-12-05 19:31:45.057154] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.842 [2024-12-05 19:31:45.057169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.842 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.842 "name": "Existed_Raid", 00:11:51.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.842 "strip_size_kb": 64, 00:11:51.842 "state": "configuring", 00:11:51.842 "raid_level": "raid0", 00:11:51.842 "superblock": false, 00:11:51.842 "num_base_bdevs": 3, 00:11:51.842 "num_base_bdevs_discovered": 0, 00:11:51.842 "num_base_bdevs_operational": 3, 00:11:51.842 "base_bdevs_list": [ 00:11:51.842 { 00:11:51.842 "name": "BaseBdev1", 00:11:51.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.843 "is_configured": false, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 0 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "name": "BaseBdev2", 00:11:51.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.843 "is_configured": false, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 0 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "name": "BaseBdev3", 00:11:51.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.843 "is_configured": false, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 0 00:11:51.843 } 00:11:51.843 ] 00:11:51.843 }' 00:11:51.843 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.843 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.410 19:31:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.410 [2024-12-05 19:31:45.569141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.410 [2024-12-05 19:31:45.569335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.410 [2024-12-05 19:31:45.577138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.410 [2024-12-05 19:31:45.577210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.410 [2024-12-05 19:31:45.577236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.410 [2024-12-05 19:31:45.577252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.410 [2024-12-05 19:31:45.577261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.410 [2024-12-05 19:31:45.577276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.410 [2024-12-05 19:31:45.623842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.410 BaseBdev1 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.410 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.411 [ 00:11:52.411 { 00:11:52.411 "name": "BaseBdev1", 00:11:52.411 "aliases": [ 00:11:52.411 "4c8a946a-6936-4f12-837a-595a732ddd26" 00:11:52.411 ], 00:11:52.411 
"product_name": "Malloc disk", 00:11:52.411 "block_size": 512, 00:11:52.411 "num_blocks": 65536, 00:11:52.411 "uuid": "4c8a946a-6936-4f12-837a-595a732ddd26", 00:11:52.411 "assigned_rate_limits": { 00:11:52.411 "rw_ios_per_sec": 0, 00:11:52.411 "rw_mbytes_per_sec": 0, 00:11:52.411 "r_mbytes_per_sec": 0, 00:11:52.411 "w_mbytes_per_sec": 0 00:11:52.411 }, 00:11:52.411 "claimed": true, 00:11:52.411 "claim_type": "exclusive_write", 00:11:52.411 "zoned": false, 00:11:52.411 "supported_io_types": { 00:11:52.411 "read": true, 00:11:52.411 "write": true, 00:11:52.411 "unmap": true, 00:11:52.411 "flush": true, 00:11:52.411 "reset": true, 00:11:52.411 "nvme_admin": false, 00:11:52.411 "nvme_io": false, 00:11:52.411 "nvme_io_md": false, 00:11:52.411 "write_zeroes": true, 00:11:52.411 "zcopy": true, 00:11:52.411 "get_zone_info": false, 00:11:52.411 "zone_management": false, 00:11:52.411 "zone_append": false, 00:11:52.411 "compare": false, 00:11:52.411 "compare_and_write": false, 00:11:52.411 "abort": true, 00:11:52.411 "seek_hole": false, 00:11:52.411 "seek_data": false, 00:11:52.411 "copy": true, 00:11:52.411 "nvme_iov_md": false 00:11:52.411 }, 00:11:52.411 "memory_domains": [ 00:11:52.411 { 00:11:52.411 "dma_device_id": "system", 00:11:52.411 "dma_device_type": 1 00:11:52.411 }, 00:11:52.411 { 00:11:52.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.411 "dma_device_type": 2 00:11:52.411 } 00:11:52.411 ], 00:11:52.411 "driver_specific": {} 00:11:52.411 } 00:11:52.411 ] 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.411 19:31:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.411 "name": "Existed_Raid", 00:11:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.411 "strip_size_kb": 64, 00:11:52.411 "state": "configuring", 00:11:52.411 "raid_level": "raid0", 00:11:52.411 "superblock": false, 00:11:52.411 "num_base_bdevs": 3, 00:11:52.411 "num_base_bdevs_discovered": 1, 00:11:52.411 "num_base_bdevs_operational": 3, 00:11:52.411 "base_bdevs_list": [ 00:11:52.411 { 00:11:52.411 "name": "BaseBdev1", 
00:11:52.411 "uuid": "4c8a946a-6936-4f12-837a-595a732ddd26", 00:11:52.411 "is_configured": true, 00:11:52.411 "data_offset": 0, 00:11:52.411 "data_size": 65536 00:11:52.411 }, 00:11:52.411 { 00:11:52.411 "name": "BaseBdev2", 00:11:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.411 "is_configured": false, 00:11:52.411 "data_offset": 0, 00:11:52.411 "data_size": 0 00:11:52.411 }, 00:11:52.411 { 00:11:52.411 "name": "BaseBdev3", 00:11:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.411 "is_configured": false, 00:11:52.411 "data_offset": 0, 00:11:52.411 "data_size": 0 00:11:52.411 } 00:11:52.411 ] 00:11:52.411 }' 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.411 19:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.980 [2024-12-05 19:31:46.160056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.980 [2024-12-05 19:31:46.160280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.980 [2024-12-05 
19:31:46.172100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.980 [2024-12-05 19:31:46.174597] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.980 [2024-12-05 19:31:46.174804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.980 [2024-12-05 19:31:46.174938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.980 [2024-12-05 19:31:46.175000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.980 "name": "Existed_Raid", 00:11:52.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.980 "strip_size_kb": 64, 00:11:52.980 "state": "configuring", 00:11:52.980 "raid_level": "raid0", 00:11:52.980 "superblock": false, 00:11:52.980 "num_base_bdevs": 3, 00:11:52.980 "num_base_bdevs_discovered": 1, 00:11:52.980 "num_base_bdevs_operational": 3, 00:11:52.980 "base_bdevs_list": [ 00:11:52.980 { 00:11:52.980 "name": "BaseBdev1", 00:11:52.980 "uuid": "4c8a946a-6936-4f12-837a-595a732ddd26", 00:11:52.980 "is_configured": true, 00:11:52.980 "data_offset": 0, 00:11:52.980 "data_size": 65536 00:11:52.980 }, 00:11:52.980 { 00:11:52.980 "name": "BaseBdev2", 00:11:52.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.980 "is_configured": false, 00:11:52.980 "data_offset": 0, 00:11:52.980 "data_size": 0 00:11:52.980 }, 00:11:52.980 { 00:11:52.980 "name": "BaseBdev3", 00:11:52.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.980 "is_configured": false, 00:11:52.980 "data_offset": 0, 00:11:52.980 "data_size": 0 00:11:52.980 } 00:11:52.980 ] 00:11:52.980 }' 00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:52.980 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 [2024-12-05 19:31:46.755710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.548 BaseBdev2 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.548 19:31:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 [ 00:11:53.548 { 00:11:53.548 "name": "BaseBdev2", 00:11:53.548 "aliases": [ 00:11:53.548 "9502965c-aeb1-46f0-af8b-9fde7574705e" 00:11:53.548 ], 00:11:53.548 "product_name": "Malloc disk", 00:11:53.548 "block_size": 512, 00:11:53.548 "num_blocks": 65536, 00:11:53.548 "uuid": "9502965c-aeb1-46f0-af8b-9fde7574705e", 00:11:53.548 "assigned_rate_limits": { 00:11:53.548 "rw_ios_per_sec": 0, 00:11:53.548 "rw_mbytes_per_sec": 0, 00:11:53.548 "r_mbytes_per_sec": 0, 00:11:53.548 "w_mbytes_per_sec": 0 00:11:53.548 }, 00:11:53.548 "claimed": true, 00:11:53.548 "claim_type": "exclusive_write", 00:11:53.548 "zoned": false, 00:11:53.548 "supported_io_types": { 00:11:53.548 "read": true, 00:11:53.548 "write": true, 00:11:53.548 "unmap": true, 00:11:53.548 "flush": true, 00:11:53.548 "reset": true, 00:11:53.548 "nvme_admin": false, 00:11:53.548 "nvme_io": false, 00:11:53.548 "nvme_io_md": false, 00:11:53.548 "write_zeroes": true, 00:11:53.548 "zcopy": true, 00:11:53.548 "get_zone_info": false, 00:11:53.548 "zone_management": false, 00:11:53.548 "zone_append": false, 00:11:53.548 "compare": false, 00:11:53.548 "compare_and_write": false, 00:11:53.548 "abort": true, 00:11:53.548 "seek_hole": false, 00:11:53.548 "seek_data": false, 00:11:53.548 "copy": true, 00:11:53.548 "nvme_iov_md": false 00:11:53.548 }, 00:11:53.548 "memory_domains": [ 00:11:53.548 { 00:11:53.548 "dma_device_id": "system", 00:11:53.548 "dma_device_type": 1 00:11:53.548 }, 00:11:53.548 { 00:11:53.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.548 "dma_device_type": 2 00:11:53.548 } 00:11:53.548 ], 00:11:53.548 "driver_specific": {} 00:11:53.548 } 00:11:53.548 ] 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.548 19:31:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.548 "name": "Existed_Raid", 00:11:53.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.548 "strip_size_kb": 64, 00:11:53.548 "state": "configuring", 00:11:53.548 "raid_level": "raid0", 00:11:53.548 "superblock": false, 00:11:53.548 "num_base_bdevs": 3, 00:11:53.548 "num_base_bdevs_discovered": 2, 00:11:53.548 "num_base_bdevs_operational": 3, 00:11:53.548 "base_bdevs_list": [ 00:11:53.548 { 00:11:53.548 "name": "BaseBdev1", 00:11:53.548 "uuid": "4c8a946a-6936-4f12-837a-595a732ddd26", 00:11:53.548 "is_configured": true, 00:11:53.548 "data_offset": 0, 00:11:53.548 "data_size": 65536 00:11:53.548 }, 00:11:53.548 { 00:11:53.548 "name": "BaseBdev2", 00:11:53.548 "uuid": "9502965c-aeb1-46f0-af8b-9fde7574705e", 00:11:53.548 "is_configured": true, 00:11:53.548 "data_offset": 0, 00:11:53.548 "data_size": 65536 00:11:53.548 }, 00:11:53.548 { 00:11:53.548 "name": "BaseBdev3", 00:11:53.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.548 "is_configured": false, 00:11:53.548 "data_offset": 0, 00:11:53.548 "data_size": 0 00:11:53.548 } 00:11:53.548 ] 00:11:53.548 }' 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.548 19:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.117 [2024-12-05 19:31:47.381473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.117 [2024-12-05 19:31:47.381527] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:54.117 [2024-12-05 19:31:47.381548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:54.117 [2024-12-05 19:31:47.381993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:54.117 [2024-12-05 19:31:47.382219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:54.117 [2024-12-05 19:31:47.382237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:54.117 [2024-12-05 19:31:47.382578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.117 BaseBdev3 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.117 
19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.117 [ 00:11:54.117 { 00:11:54.117 "name": "BaseBdev3", 00:11:54.117 "aliases": [ 00:11:54.117 "c3820ae5-3649-4998-973d-a94c63c31b55" 00:11:54.117 ], 00:11:54.117 "product_name": "Malloc disk", 00:11:54.117 "block_size": 512, 00:11:54.117 "num_blocks": 65536, 00:11:54.117 "uuid": "c3820ae5-3649-4998-973d-a94c63c31b55", 00:11:54.117 "assigned_rate_limits": { 00:11:54.117 "rw_ios_per_sec": 0, 00:11:54.117 "rw_mbytes_per_sec": 0, 00:11:54.117 "r_mbytes_per_sec": 0, 00:11:54.117 "w_mbytes_per_sec": 0 00:11:54.117 }, 00:11:54.117 "claimed": true, 00:11:54.117 "claim_type": "exclusive_write", 00:11:54.117 "zoned": false, 00:11:54.117 "supported_io_types": { 00:11:54.117 "read": true, 00:11:54.117 "write": true, 00:11:54.117 "unmap": true, 00:11:54.117 "flush": true, 00:11:54.117 "reset": true, 00:11:54.117 "nvme_admin": false, 00:11:54.117 "nvme_io": false, 00:11:54.117 "nvme_io_md": false, 00:11:54.117 "write_zeroes": true, 00:11:54.117 "zcopy": true, 00:11:54.117 "get_zone_info": false, 00:11:54.117 "zone_management": false, 00:11:54.117 "zone_append": false, 00:11:54.117 "compare": false, 00:11:54.117 "compare_and_write": false, 00:11:54.117 "abort": true, 00:11:54.117 "seek_hole": false, 00:11:54.117 "seek_data": false, 00:11:54.117 "copy": true, 00:11:54.117 "nvme_iov_md": false 00:11:54.117 }, 00:11:54.117 "memory_domains": [ 00:11:54.117 { 00:11:54.117 "dma_device_id": "system", 00:11:54.117 "dma_device_type": 1 00:11:54.117 }, 00:11:54.117 { 00:11:54.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.117 "dma_device_type": 2 00:11:54.117 } 00:11:54.117 ], 00:11:54.117 "driver_specific": {} 00:11:54.117 } 00:11:54.117 ] 
00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.117 "name": "Existed_Raid", 00:11:54.117 "uuid": "091f60c5-e3a9-499d-960e-6b6620752b9a", 00:11:54.117 "strip_size_kb": 64, 00:11:54.117 "state": "online", 00:11:54.117 "raid_level": "raid0", 00:11:54.117 "superblock": false, 00:11:54.117 "num_base_bdevs": 3, 00:11:54.117 "num_base_bdevs_discovered": 3, 00:11:54.117 "num_base_bdevs_operational": 3, 00:11:54.117 "base_bdevs_list": [ 00:11:54.117 { 00:11:54.117 "name": "BaseBdev1", 00:11:54.117 "uuid": "4c8a946a-6936-4f12-837a-595a732ddd26", 00:11:54.117 "is_configured": true, 00:11:54.117 "data_offset": 0, 00:11:54.117 "data_size": 65536 00:11:54.117 }, 00:11:54.117 { 00:11:54.117 "name": "BaseBdev2", 00:11:54.117 "uuid": "9502965c-aeb1-46f0-af8b-9fde7574705e", 00:11:54.117 "is_configured": true, 00:11:54.117 "data_offset": 0, 00:11:54.117 "data_size": 65536 00:11:54.117 }, 00:11:54.117 { 00:11:54.117 "name": "BaseBdev3", 00:11:54.117 "uuid": "c3820ae5-3649-4998-973d-a94c63c31b55", 00:11:54.117 "is_configured": true, 00:11:54.117 "data_offset": 0, 00:11:54.117 "data_size": 65536 00:11:54.117 } 00:11:54.117 ] 00:11:54.117 }' 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.117 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 [2024-12-05 19:31:47.934093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.686 "name": "Existed_Raid", 00:11:54.686 "aliases": [ 00:11:54.686 "091f60c5-e3a9-499d-960e-6b6620752b9a" 00:11:54.686 ], 00:11:54.686 "product_name": "Raid Volume", 00:11:54.686 "block_size": 512, 00:11:54.686 "num_blocks": 196608, 00:11:54.686 "uuid": "091f60c5-e3a9-499d-960e-6b6620752b9a", 00:11:54.686 "assigned_rate_limits": { 00:11:54.686 "rw_ios_per_sec": 0, 00:11:54.686 "rw_mbytes_per_sec": 0, 00:11:54.686 "r_mbytes_per_sec": 0, 00:11:54.686 "w_mbytes_per_sec": 0 00:11:54.686 }, 00:11:54.686 "claimed": false, 00:11:54.686 "zoned": false, 00:11:54.686 "supported_io_types": { 00:11:54.686 "read": true, 00:11:54.686 "write": true, 00:11:54.686 "unmap": true, 00:11:54.686 "flush": true, 00:11:54.686 "reset": true, 00:11:54.686 "nvme_admin": false, 00:11:54.686 "nvme_io": false, 00:11:54.686 "nvme_io_md": false, 00:11:54.686 "write_zeroes": true, 00:11:54.686 "zcopy": false, 00:11:54.686 "get_zone_info": false, 00:11:54.686 "zone_management": false, 00:11:54.686 
"zone_append": false, 00:11:54.686 "compare": false, 00:11:54.686 "compare_and_write": false, 00:11:54.686 "abort": false, 00:11:54.686 "seek_hole": false, 00:11:54.686 "seek_data": false, 00:11:54.686 "copy": false, 00:11:54.686 "nvme_iov_md": false 00:11:54.686 }, 00:11:54.686 "memory_domains": [ 00:11:54.686 { 00:11:54.686 "dma_device_id": "system", 00:11:54.686 "dma_device_type": 1 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.686 "dma_device_type": 2 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "dma_device_id": "system", 00:11:54.686 "dma_device_type": 1 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.686 "dma_device_type": 2 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "dma_device_id": "system", 00:11:54.686 "dma_device_type": 1 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.686 "dma_device_type": 2 00:11:54.686 } 00:11:54.686 ], 00:11:54.686 "driver_specific": { 00:11:54.686 "raid": { 00:11:54.686 "uuid": "091f60c5-e3a9-499d-960e-6b6620752b9a", 00:11:54.686 "strip_size_kb": 64, 00:11:54.686 "state": "online", 00:11:54.686 "raid_level": "raid0", 00:11:54.686 "superblock": false, 00:11:54.686 "num_base_bdevs": 3, 00:11:54.686 "num_base_bdevs_discovered": 3, 00:11:54.686 "num_base_bdevs_operational": 3, 00:11:54.686 "base_bdevs_list": [ 00:11:54.686 { 00:11:54.686 "name": "BaseBdev1", 00:11:54.686 "uuid": "4c8a946a-6936-4f12-837a-595a732ddd26", 00:11:54.686 "is_configured": true, 00:11:54.686 "data_offset": 0, 00:11:54.686 "data_size": 65536 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "name": "BaseBdev2", 00:11:54.686 "uuid": "9502965c-aeb1-46f0-af8b-9fde7574705e", 00:11:54.686 "is_configured": true, 00:11:54.686 "data_offset": 0, 00:11:54.686 "data_size": 65536 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "name": "BaseBdev3", 00:11:54.686 "uuid": "c3820ae5-3649-4998-973d-a94c63c31b55", 00:11:54.686 "is_configured": true, 
00:11:54.686 "data_offset": 0, 00:11:54.686 "data_size": 65536 00:11:54.686 } 00:11:54.686 ] 00:11:54.686 } 00:11:54.686 } 00:11:54.686 }' 00:11:54.686 19:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.686 BaseBdev2 00:11:54.686 BaseBdev3' 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.946 19:31:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.946 [2024-12-05 19:31:48.237796] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.946 [2024-12-05 19:31:48.237832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.946 [2024-12-05 19:31:48.237904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.946 "name": "Existed_Raid", 00:11:54.946 "uuid": "091f60c5-e3a9-499d-960e-6b6620752b9a", 00:11:54.946 "strip_size_kb": 64, 00:11:54.946 "state": "offline", 00:11:54.946 "raid_level": "raid0", 00:11:54.946 "superblock": false, 00:11:54.946 "num_base_bdevs": 3, 00:11:54.946 "num_base_bdevs_discovered": 2, 00:11:54.946 "num_base_bdevs_operational": 2, 00:11:54.946 "base_bdevs_list": [ 00:11:54.946 { 00:11:54.946 "name": null, 00:11:54.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.946 "is_configured": false, 00:11:54.946 "data_offset": 0, 00:11:54.946 "data_size": 65536 00:11:54.946 }, 00:11:54.946 { 00:11:54.946 "name": "BaseBdev2", 00:11:54.946 "uuid": "9502965c-aeb1-46f0-af8b-9fde7574705e", 00:11:54.946 "is_configured": true, 00:11:54.946 "data_offset": 0, 00:11:54.946 "data_size": 65536 00:11:54.946 }, 00:11:54.946 { 00:11:54.946 "name": "BaseBdev3", 00:11:54.946 "uuid": "c3820ae5-3649-4998-973d-a94c63c31b55", 00:11:54.946 "is_configured": true, 00:11:54.946 "data_offset": 0, 00:11:54.946 "data_size": 65536 00:11:54.946 } 00:11:54.946 ] 00:11:54.946 }' 00:11:54.946 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.946 19:31:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.515 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.515 [2024-12-05 19:31:48.893532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.773 19:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.773 [2024-12-05 19:31:49.053634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.773 [2024-12-05 19:31:49.053716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.773 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.774 19:31:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.774 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.053 BaseBdev2 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.053 19:31:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.053 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.053 [ 00:11:56.053 { 00:11:56.053 "name": "BaseBdev2", 00:11:56.053 "aliases": [ 00:11:56.053 "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5" 00:11:56.053 ], 00:11:56.053 "product_name": "Malloc disk", 00:11:56.053 "block_size": 512, 00:11:56.053 "num_blocks": 65536, 00:11:56.053 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:56.053 "assigned_rate_limits": { 00:11:56.053 "rw_ios_per_sec": 0, 00:11:56.053 "rw_mbytes_per_sec": 0, 00:11:56.053 "r_mbytes_per_sec": 0, 00:11:56.053 "w_mbytes_per_sec": 0 00:11:56.053 }, 00:11:56.054 "claimed": false, 00:11:56.054 "zoned": false, 00:11:56.054 "supported_io_types": { 00:11:56.054 "read": true, 00:11:56.054 "write": true, 00:11:56.054 "unmap": true, 00:11:56.054 "flush": true, 00:11:56.054 "reset": true, 00:11:56.054 "nvme_admin": false, 00:11:56.054 "nvme_io": false, 00:11:56.054 "nvme_io_md": false, 00:11:56.054 "write_zeroes": true, 00:11:56.054 "zcopy": true, 00:11:56.054 "get_zone_info": false, 00:11:56.054 "zone_management": false, 00:11:56.054 "zone_append": false, 00:11:56.054 "compare": false, 00:11:56.054 "compare_and_write": false, 00:11:56.054 "abort": true, 00:11:56.054 "seek_hole": false, 00:11:56.054 "seek_data": false, 00:11:56.054 "copy": true, 00:11:56.054 "nvme_iov_md": false 00:11:56.054 }, 00:11:56.054 "memory_domains": [ 00:11:56.054 { 00:11:56.054 "dma_device_id": "system", 00:11:56.054 "dma_device_type": 1 00:11:56.054 }, 00:11:56.054 { 00:11:56.054 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:56.054 "dma_device_type": 2 00:11:56.054 } 00:11:56.054 ], 00:11:56.054 "driver_specific": {} 00:11:56.054 } 00:11:56.054 ] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.054 BaseBdev3 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.054 19:31:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.054 [ 00:11:56.054 { 00:11:56.054 "name": "BaseBdev3", 00:11:56.054 "aliases": [ 00:11:56.054 "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe" 00:11:56.054 ], 00:11:56.054 "product_name": "Malloc disk", 00:11:56.054 "block_size": 512, 00:11:56.054 "num_blocks": 65536, 00:11:56.054 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:56.054 "assigned_rate_limits": { 00:11:56.054 "rw_ios_per_sec": 0, 00:11:56.054 "rw_mbytes_per_sec": 0, 00:11:56.054 "r_mbytes_per_sec": 0, 00:11:56.054 "w_mbytes_per_sec": 0 00:11:56.054 }, 00:11:56.054 "claimed": false, 00:11:56.054 "zoned": false, 00:11:56.054 "supported_io_types": { 00:11:56.054 "read": true, 00:11:56.054 "write": true, 00:11:56.054 "unmap": true, 00:11:56.054 "flush": true, 00:11:56.054 "reset": true, 00:11:56.054 "nvme_admin": false, 00:11:56.054 "nvme_io": false, 00:11:56.054 "nvme_io_md": false, 00:11:56.054 "write_zeroes": true, 00:11:56.054 "zcopy": true, 00:11:56.054 "get_zone_info": false, 00:11:56.054 "zone_management": false, 00:11:56.054 "zone_append": false, 00:11:56.054 "compare": false, 00:11:56.054 "compare_and_write": false, 00:11:56.054 "abort": true, 00:11:56.054 "seek_hole": false, 00:11:56.054 "seek_data": false, 00:11:56.054 "copy": true, 00:11:56.054 "nvme_iov_md": false 00:11:56.054 }, 00:11:56.054 "memory_domains": [ 00:11:56.054 { 00:11:56.054 "dma_device_id": "system", 00:11:56.054 "dma_device_type": 1 00:11:56.054 }, 00:11:56.054 { 00:11:56.054 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:56.054 "dma_device_type": 2 00:11:56.054 } 00:11:56.054 ], 00:11:56.054 "driver_specific": {} 00:11:56.054 } 00:11:56.054 ] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.054 [2024-12-05 19:31:49.338018] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.054 [2024-12-05 19:31:49.338078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.054 [2024-12-05 19:31:49.338111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.054 [2024-12-05 19:31:49.340563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.054 
19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.054 "name": "Existed_Raid", 00:11:56.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.054 "strip_size_kb": 64, 00:11:56.054 "state": "configuring", 00:11:56.054 "raid_level": "raid0", 00:11:56.054 "superblock": false, 00:11:56.054 "num_base_bdevs": 3, 00:11:56.054 "num_base_bdevs_discovered": 2, 00:11:56.054 "num_base_bdevs_operational": 3, 00:11:56.054 "base_bdevs_list": [ 00:11:56.054 { 00:11:56.054 "name": "BaseBdev1", 00:11:56.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.054 "is_configured": false, 00:11:56.054 
"data_offset": 0, 00:11:56.054 "data_size": 0 00:11:56.054 }, 00:11:56.054 { 00:11:56.054 "name": "BaseBdev2", 00:11:56.054 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:56.054 "is_configured": true, 00:11:56.054 "data_offset": 0, 00:11:56.054 "data_size": 65536 00:11:56.054 }, 00:11:56.054 { 00:11:56.054 "name": "BaseBdev3", 00:11:56.054 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:56.054 "is_configured": true, 00:11:56.054 "data_offset": 0, 00:11:56.054 "data_size": 65536 00:11:56.054 } 00:11:56.054 ] 00:11:56.054 }' 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.054 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.687 [2024-12-05 19:31:49.830192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.687 "name": "Existed_Raid", 00:11:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.687 "strip_size_kb": 64, 00:11:56.687 "state": "configuring", 00:11:56.687 "raid_level": "raid0", 00:11:56.687 "superblock": false, 00:11:56.687 "num_base_bdevs": 3, 00:11:56.687 "num_base_bdevs_discovered": 1, 00:11:56.687 "num_base_bdevs_operational": 3, 00:11:56.687 "base_bdevs_list": [ 00:11:56.687 { 00:11:56.687 "name": "BaseBdev1", 00:11:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.687 "is_configured": false, 00:11:56.687 "data_offset": 0, 00:11:56.687 "data_size": 0 00:11:56.687 }, 00:11:56.687 { 00:11:56.687 "name": null, 00:11:56.687 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:56.687 "is_configured": false, 00:11:56.687 "data_offset": 0, 00:11:56.687 "data_size": 65536 00:11:56.687 }, 00:11:56.687 { 
00:11:56.687 "name": "BaseBdev3", 00:11:56.687 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:56.687 "is_configured": true, 00:11:56.687 "data_offset": 0, 00:11:56.687 "data_size": 65536 00:11:56.687 } 00:11:56.687 ] 00:11:56.687 }' 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.687 19:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.946 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:56.946 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.946 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.946 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.946 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.205 [2024-12-05 19:31:50.425990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.205 BaseBdev1 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:57.205 19:31:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.205 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.205 [ 00:11:57.205 { 00:11:57.205 "name": "BaseBdev1", 00:11:57.205 "aliases": [ 00:11:57.205 "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5" 00:11:57.205 ], 00:11:57.205 "product_name": "Malloc disk", 00:11:57.205 "block_size": 512, 00:11:57.205 "num_blocks": 65536, 00:11:57.205 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:11:57.205 "assigned_rate_limits": { 00:11:57.205 "rw_ios_per_sec": 0, 00:11:57.205 "rw_mbytes_per_sec": 0, 00:11:57.205 "r_mbytes_per_sec": 0, 00:11:57.205 "w_mbytes_per_sec": 0 00:11:57.205 }, 00:11:57.205 "claimed": true, 00:11:57.205 "claim_type": "exclusive_write", 00:11:57.205 "zoned": false, 00:11:57.205 "supported_io_types": { 00:11:57.205 "read": true, 00:11:57.205 "write": true, 00:11:57.205 "unmap": true, 00:11:57.205 "flush": true, 
00:11:57.205 "reset": true, 00:11:57.205 "nvme_admin": false, 00:11:57.205 "nvme_io": false, 00:11:57.205 "nvme_io_md": false, 00:11:57.205 "write_zeroes": true, 00:11:57.206 "zcopy": true, 00:11:57.206 "get_zone_info": false, 00:11:57.206 "zone_management": false, 00:11:57.206 "zone_append": false, 00:11:57.206 "compare": false, 00:11:57.206 "compare_and_write": false, 00:11:57.206 "abort": true, 00:11:57.206 "seek_hole": false, 00:11:57.206 "seek_data": false, 00:11:57.206 "copy": true, 00:11:57.206 "nvme_iov_md": false 00:11:57.206 }, 00:11:57.206 "memory_domains": [ 00:11:57.206 { 00:11:57.206 "dma_device_id": "system", 00:11:57.206 "dma_device_type": 1 00:11:57.206 }, 00:11:57.206 { 00:11:57.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.206 "dma_device_type": 2 00:11:57.206 } 00:11:57.206 ], 00:11:57.206 "driver_specific": {} 00:11:57.206 } 00:11:57.206 ] 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.206 "name": "Existed_Raid", 00:11:57.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.206 "strip_size_kb": 64, 00:11:57.206 "state": "configuring", 00:11:57.206 "raid_level": "raid0", 00:11:57.206 "superblock": false, 00:11:57.206 "num_base_bdevs": 3, 00:11:57.206 "num_base_bdevs_discovered": 2, 00:11:57.206 "num_base_bdevs_operational": 3, 00:11:57.206 "base_bdevs_list": [ 00:11:57.206 { 00:11:57.206 "name": "BaseBdev1", 00:11:57.206 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:11:57.206 "is_configured": true, 00:11:57.206 "data_offset": 0, 00:11:57.206 "data_size": 65536 00:11:57.206 }, 00:11:57.206 { 00:11:57.206 "name": null, 00:11:57.206 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:57.206 "is_configured": false, 00:11:57.206 "data_offset": 0, 00:11:57.206 "data_size": 65536 00:11:57.206 }, 00:11:57.206 { 00:11:57.206 "name": "BaseBdev3", 00:11:57.206 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:57.206 "is_configured": true, 00:11:57.206 "data_offset": 0, 00:11:57.206 "data_size": 65536 
00:11:57.206 } 00:11:57.206 ] 00:11:57.206 }' 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.206 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.772 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.773 19:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.773 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.773 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.773 19:31:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.773 [2024-12-05 19:31:51.026248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.773 
19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.773 "name": "Existed_Raid", 00:11:57.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.773 "strip_size_kb": 64, 00:11:57.773 "state": "configuring", 00:11:57.773 "raid_level": "raid0", 00:11:57.773 "superblock": false, 00:11:57.773 "num_base_bdevs": 3, 00:11:57.773 "num_base_bdevs_discovered": 1, 00:11:57.773 "num_base_bdevs_operational": 3, 00:11:57.773 "base_bdevs_list": [ 00:11:57.773 { 00:11:57.773 "name": "BaseBdev1", 00:11:57.773 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:11:57.773 "is_configured": true, 00:11:57.773 "data_offset": 0, 00:11:57.773 "data_size": 65536 00:11:57.773 }, 00:11:57.773 { 00:11:57.773 "name": null, 
00:11:57.773 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:57.773 "is_configured": false, 00:11:57.773 "data_offset": 0, 00:11:57.773 "data_size": 65536 00:11:57.773 }, 00:11:57.773 { 00:11:57.773 "name": null, 00:11:57.773 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:57.773 "is_configured": false, 00:11:57.773 "data_offset": 0, 00:11:57.773 "data_size": 65536 00:11:57.773 } 00:11:57.773 ] 00:11:57.773 }' 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.773 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 [2024-12-05 19:31:51.582444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.394 "name": "Existed_Raid", 00:11:58.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.394 "strip_size_kb": 64, 00:11:58.394 "state": "configuring", 00:11:58.394 "raid_level": "raid0", 00:11:58.394 "superblock": false, 00:11:58.394 
"num_base_bdevs": 3, 00:11:58.394 "num_base_bdevs_discovered": 2, 00:11:58.394 "num_base_bdevs_operational": 3, 00:11:58.394 "base_bdevs_list": [ 00:11:58.394 { 00:11:58.394 "name": "BaseBdev1", 00:11:58.394 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:11:58.394 "is_configured": true, 00:11:58.394 "data_offset": 0, 00:11:58.394 "data_size": 65536 00:11:58.394 }, 00:11:58.394 { 00:11:58.394 "name": null, 00:11:58.394 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:58.394 "is_configured": false, 00:11:58.394 "data_offset": 0, 00:11:58.394 "data_size": 65536 00:11:58.394 }, 00:11:58.394 { 00:11:58.394 "name": "BaseBdev3", 00:11:58.394 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:58.394 "is_configured": true, 00:11:58.394 "data_offset": 0, 00:11:58.394 "data_size": 65536 00:11:58.394 } 00:11:58.394 ] 00:11:58.394 }' 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.394 19:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.653 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.653 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.653 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.653 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.910 19:31:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.910 [2024-12-05 19:31:52.142654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.910 19:31:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.911 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.911 "name": "Existed_Raid", 00:11:58.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.911 "strip_size_kb": 64, 00:11:58.911 "state": "configuring", 00:11:58.911 "raid_level": "raid0", 00:11:58.911 "superblock": false, 00:11:58.911 "num_base_bdevs": 3, 00:11:58.911 "num_base_bdevs_discovered": 1, 00:11:58.911 "num_base_bdevs_operational": 3, 00:11:58.911 "base_bdevs_list": [ 00:11:58.911 { 00:11:58.911 "name": null, 00:11:58.911 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:11:58.911 "is_configured": false, 00:11:58.911 "data_offset": 0, 00:11:58.911 "data_size": 65536 00:11:58.911 }, 00:11:58.911 { 00:11:58.911 "name": null, 00:11:58.911 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:58.911 "is_configured": false, 00:11:58.911 "data_offset": 0, 00:11:58.911 "data_size": 65536 00:11:58.911 }, 00:11:58.911 { 00:11:58.911 "name": "BaseBdev3", 00:11:58.911 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:58.911 "is_configured": true, 00:11:58.911 "data_offset": 0, 00:11:58.911 "data_size": 65536 00:11:58.911 } 00:11:58.911 ] 00:11:58.911 }' 00:11:58.911 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.911 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.478 [2024-12-05 19:31:52.804635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.478 "name": "Existed_Raid", 00:11:59.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.478 "strip_size_kb": 64, 00:11:59.478 "state": "configuring", 00:11:59.478 "raid_level": "raid0", 00:11:59.478 "superblock": false, 00:11:59.478 "num_base_bdevs": 3, 00:11:59.478 "num_base_bdevs_discovered": 2, 00:11:59.478 "num_base_bdevs_operational": 3, 00:11:59.478 "base_bdevs_list": [ 00:11:59.478 { 00:11:59.478 "name": null, 00:11:59.478 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:11:59.478 "is_configured": false, 00:11:59.478 "data_offset": 0, 00:11:59.478 "data_size": 65536 00:11:59.478 }, 00:11:59.478 { 00:11:59.478 "name": "BaseBdev2", 00:11:59.478 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:11:59.478 "is_configured": true, 00:11:59.478 "data_offset": 0, 00:11:59.478 "data_size": 65536 00:11:59.478 }, 00:11:59.478 { 00:11:59.478 "name": "BaseBdev3", 00:11:59.478 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:11:59.478 "is_configured": true, 00:11:59.478 "data_offset": 0, 00:11:59.478 "data_size": 65536 00:11:59.478 } 00:11:59.478 ] 00:11:59.478 }' 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.478 19:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:00.044 
19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ec83949d-db5b-4b59-a0c3-cd7e7f30ade5 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 [2024-12-05 19:31:53.470958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:00.044 [2024-12-05 19:31:53.471012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:00.044 [2024-12-05 19:31:53.471028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:00.044 [2024-12-05 19:31:53.471341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:12:00.044 [2024-12-05 19:31:53.471539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.044 [2024-12-05 19:31:53.471556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:00.044 NewBaseBdev 00:12:00.044 [2024-12-05 19:31:53.471903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.044 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:00.302 [ 00:12:00.302 { 00:12:00.302 "name": "NewBaseBdev", 00:12:00.302 "aliases": [ 00:12:00.302 "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5" 00:12:00.302 ], 00:12:00.302 "product_name": "Malloc disk", 00:12:00.302 "block_size": 512, 00:12:00.302 "num_blocks": 65536, 00:12:00.302 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:12:00.302 "assigned_rate_limits": { 00:12:00.302 "rw_ios_per_sec": 0, 00:12:00.302 "rw_mbytes_per_sec": 0, 00:12:00.302 "r_mbytes_per_sec": 0, 00:12:00.302 "w_mbytes_per_sec": 0 00:12:00.302 }, 00:12:00.302 "claimed": true, 00:12:00.302 "claim_type": "exclusive_write", 00:12:00.302 "zoned": false, 00:12:00.302 "supported_io_types": { 00:12:00.302 "read": true, 00:12:00.302 "write": true, 00:12:00.302 "unmap": true, 00:12:00.302 "flush": true, 00:12:00.302 "reset": true, 00:12:00.302 "nvme_admin": false, 00:12:00.302 "nvme_io": false, 00:12:00.302 "nvme_io_md": false, 00:12:00.302 "write_zeroes": true, 00:12:00.302 "zcopy": true, 00:12:00.302 "get_zone_info": false, 00:12:00.302 "zone_management": false, 00:12:00.302 "zone_append": false, 00:12:00.302 "compare": false, 00:12:00.302 "compare_and_write": false, 00:12:00.302 "abort": true, 00:12:00.302 "seek_hole": false, 00:12:00.302 "seek_data": false, 00:12:00.302 "copy": true, 00:12:00.302 "nvme_iov_md": false 00:12:00.302 }, 00:12:00.302 "memory_domains": [ 00:12:00.302 { 00:12:00.302 "dma_device_id": "system", 00:12:00.302 "dma_device_type": 1 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.302 "dma_device_type": 2 00:12:00.302 } 00:12:00.302 ], 00:12:00.302 "driver_specific": {} 00:12:00.302 } 00:12:00.302 ] 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.302 "name": "Existed_Raid", 00:12:00.302 "uuid": "8882e8bd-ba57-4c75-b6a3-2803c458932a", 00:12:00.302 "strip_size_kb": 64, 00:12:00.302 "state": "online", 00:12:00.302 "raid_level": "raid0", 00:12:00.302 "superblock": false, 00:12:00.302 "num_base_bdevs": 3, 00:12:00.302 
"num_base_bdevs_discovered": 3, 00:12:00.302 "num_base_bdevs_operational": 3, 00:12:00.302 "base_bdevs_list": [ 00:12:00.302 { 00:12:00.302 "name": "NewBaseBdev", 00:12:00.302 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:12:00.302 "is_configured": true, 00:12:00.302 "data_offset": 0, 00:12:00.302 "data_size": 65536 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "name": "BaseBdev2", 00:12:00.302 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:12:00.302 "is_configured": true, 00:12:00.302 "data_offset": 0, 00:12:00.302 "data_size": 65536 00:12:00.302 }, 00:12:00.302 { 00:12:00.302 "name": "BaseBdev3", 00:12:00.302 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:12:00.302 "is_configured": true, 00:12:00.302 "data_offset": 0, 00:12:00.302 "data_size": 65536 00:12:00.302 } 00:12:00.302 ] 00:12:00.302 }' 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.302 19:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.866 [2024-12-05 19:31:54.023542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.866 "name": "Existed_Raid", 00:12:00.866 "aliases": [ 00:12:00.866 "8882e8bd-ba57-4c75-b6a3-2803c458932a" 00:12:00.866 ], 00:12:00.866 "product_name": "Raid Volume", 00:12:00.866 "block_size": 512, 00:12:00.866 "num_blocks": 196608, 00:12:00.866 "uuid": "8882e8bd-ba57-4c75-b6a3-2803c458932a", 00:12:00.866 "assigned_rate_limits": { 00:12:00.866 "rw_ios_per_sec": 0, 00:12:00.866 "rw_mbytes_per_sec": 0, 00:12:00.866 "r_mbytes_per_sec": 0, 00:12:00.866 "w_mbytes_per_sec": 0 00:12:00.866 }, 00:12:00.866 "claimed": false, 00:12:00.866 "zoned": false, 00:12:00.866 "supported_io_types": { 00:12:00.866 "read": true, 00:12:00.866 "write": true, 00:12:00.866 "unmap": true, 00:12:00.866 "flush": true, 00:12:00.866 "reset": true, 00:12:00.866 "nvme_admin": false, 00:12:00.866 "nvme_io": false, 00:12:00.866 "nvme_io_md": false, 00:12:00.866 "write_zeroes": true, 00:12:00.866 "zcopy": false, 00:12:00.866 "get_zone_info": false, 00:12:00.866 "zone_management": false, 00:12:00.866 "zone_append": false, 00:12:00.866 "compare": false, 00:12:00.866 "compare_and_write": false, 00:12:00.866 "abort": false, 00:12:00.866 "seek_hole": false, 00:12:00.866 "seek_data": false, 00:12:00.866 "copy": false, 00:12:00.866 "nvme_iov_md": false 00:12:00.866 }, 00:12:00.866 "memory_domains": [ 00:12:00.866 { 00:12:00.866 "dma_device_id": "system", 00:12:00.866 "dma_device_type": 1 00:12:00.866 }, 00:12:00.866 { 00:12:00.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.866 "dma_device_type": 2 00:12:00.866 }, 00:12:00.866 
{ 00:12:00.866 "dma_device_id": "system", 00:12:00.866 "dma_device_type": 1 00:12:00.866 }, 00:12:00.866 { 00:12:00.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.866 "dma_device_type": 2 00:12:00.866 }, 00:12:00.866 { 00:12:00.866 "dma_device_id": "system", 00:12:00.866 "dma_device_type": 1 00:12:00.866 }, 00:12:00.866 { 00:12:00.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.866 "dma_device_type": 2 00:12:00.866 } 00:12:00.866 ], 00:12:00.866 "driver_specific": { 00:12:00.866 "raid": { 00:12:00.866 "uuid": "8882e8bd-ba57-4c75-b6a3-2803c458932a", 00:12:00.866 "strip_size_kb": 64, 00:12:00.866 "state": "online", 00:12:00.866 "raid_level": "raid0", 00:12:00.866 "superblock": false, 00:12:00.866 "num_base_bdevs": 3, 00:12:00.866 "num_base_bdevs_discovered": 3, 00:12:00.866 "num_base_bdevs_operational": 3, 00:12:00.866 "base_bdevs_list": [ 00:12:00.866 { 00:12:00.866 "name": "NewBaseBdev", 00:12:00.866 "uuid": "ec83949d-db5b-4b59-a0c3-cd7e7f30ade5", 00:12:00.866 "is_configured": true, 00:12:00.866 "data_offset": 0, 00:12:00.866 "data_size": 65536 00:12:00.866 }, 00:12:00.866 { 00:12:00.866 "name": "BaseBdev2", 00:12:00.866 "uuid": "21a0a126-4f3a-4bd6-be33-7cf3d7ea49f5", 00:12:00.866 "is_configured": true, 00:12:00.866 "data_offset": 0, 00:12:00.866 "data_size": 65536 00:12:00.866 }, 00:12:00.866 { 00:12:00.866 "name": "BaseBdev3", 00:12:00.866 "uuid": "e9ceb9f6-ac70-4f68-a98e-7a7dee486efe", 00:12:00.866 "is_configured": true, 00:12:00.866 "data_offset": 0, 00:12:00.866 "data_size": 65536 00:12:00.866 } 00:12:00.866 ] 00:12:00.866 } 00:12:00.866 } 00:12:00.866 }' 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.866 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.866 BaseBdev2 00:12:00.866 BaseBdev3' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.867 
19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.867 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.124 [2024-12-05 19:31:54.331208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.124 [2024-12-05 19:31:54.331359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.124 [2024-12-05 19:31:54.331477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.124 [2024-12-05 19:31:54.331551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.124 [2024-12-05 19:31:54.331573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63797 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63797 ']' 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63797 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63797 00:12:01.124 killing process with pid 63797 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63797' 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63797 00:12:01.124 [2024-12-05 19:31:54.371154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.124 19:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63797 00:12:01.382 [2024-12-05 19:31:54.630491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.316 ************************************ 00:12:02.316 END TEST raid_state_function_test 00:12:02.316 ************************************ 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.316 00:12:02.316 real 0m11.748s 00:12:02.316 user 0m19.574s 
00:12:02.316 sys 0m1.496s 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.316 19:31:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:02.316 19:31:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.316 19:31:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.316 19:31:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.316 ************************************ 00:12:02.316 START TEST raid_state_function_test_sb 00:12:02.316 ************************************ 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.316 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:02.317 Process raid pid: 64429 00:12:02.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64429 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64429' 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64429 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64429 ']' 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.317 19:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.575 [2024-12-05 19:31:55.852656] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:02.575 [2024-12-05 19:31:55.853888] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:02.837 [2024-12-05 19:31:56.030725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:02.838 [2024-12-05 19:31:56.168602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:03.096 [2024-12-05 19:31:56.374288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:03.096 [2024-12-05 19:31:56.374538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.663 [2024-12-05 19:31:56.807542] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:03.663 [2024-12-05 19:31:56.807774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:03.663 [2024-12-05 19:31:56.807806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:03.663 [2024-12-05 19:31:56.807826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:03.663 [2024-12-05 19:31:56.807838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:03.663 [2024-12-05 19:31:56.807854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:03.663 "name": "Existed_Raid",
00:12:03.663 "uuid": "8fefb818-c26e-43d9-83ad-e651ae498540",
00:12:03.663 "strip_size_kb": 64,
00:12:03.663 "state": "configuring",
00:12:03.663 "raid_level": "raid0",
00:12:03.663 "superblock": true,
00:12:03.663 "num_base_bdevs": 3,
00:12:03.663 "num_base_bdevs_discovered": 0,
00:12:03.663 "num_base_bdevs_operational": 3,
00:12:03.663 "base_bdevs_list": [
00:12:03.663 {
00:12:03.663 "name": "BaseBdev1",
00:12:03.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:03.663 "is_configured": false,
00:12:03.663 "data_offset": 0,
00:12:03.663 "data_size": 0
00:12:03.663 },
00:12:03.663 {
00:12:03.663 "name": "BaseBdev2",
00:12:03.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:03.663 "is_configured": false,
00:12:03.663 "data_offset": 0,
00:12:03.663 "data_size": 0
00:12:03.663 },
00:12:03.663 {
00:12:03.663 "name": "BaseBdev3",
00:12:03.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:03.663 "is_configured": false,
00:12:03.663 "data_offset": 0,
00:12:03.663 "data_size": 0
00:12:03.663 }
00:12:03.663 ]
00:12:03.663 }'
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:03.663 19:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.922 [2024-12-05 19:31:57.299614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:03.922 [2024-12-05 19:31:57.299670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.922 [2024-12-05 19:31:57.307613] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:03.922 [2024-12-05 19:31:57.307825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:03.922 [2024-12-05 19:31:57.307854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:03.922 [2024-12-05 19:31:57.307874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:03.922 [2024-12-05 19:31:57.307885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:03.922 [2024-12-05 19:31:57.307901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.922 [2024-12-05 19:31:57.352221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:03.922 BaseBdev1
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.922 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.180 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.180 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:04.180 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.180 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.180 [
00:12:04.180 {
00:12:04.180 "name": "BaseBdev1",
00:12:04.181 "aliases": [
00:12:04.181 "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc"
00:12:04.181 ],
00:12:04.181 "product_name": "Malloc disk",
00:12:04.181 "block_size": 512,
00:12:04.181 "num_blocks": 65536,
00:12:04.181 "uuid": "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc",
00:12:04.181 "assigned_rate_limits": {
00:12:04.181 "rw_ios_per_sec": 0,
00:12:04.181 "rw_mbytes_per_sec": 0,
00:12:04.181 "r_mbytes_per_sec": 0,
00:12:04.181 "w_mbytes_per_sec": 0
00:12:04.181 },
00:12:04.181 "claimed": true,
00:12:04.181 "claim_type": "exclusive_write",
00:12:04.181 "zoned": false,
00:12:04.181 "supported_io_types": {
00:12:04.181 "read": true,
00:12:04.181 "write": true,
00:12:04.181 "unmap": true,
00:12:04.181 "flush": true,
00:12:04.181 "reset": true,
00:12:04.181 "nvme_admin": false,
00:12:04.181 "nvme_io": false,
00:12:04.181 "nvme_io_md": false,
00:12:04.181 "write_zeroes": true,
00:12:04.181 "zcopy": true,
00:12:04.181 "get_zone_info": false,
00:12:04.181 "zone_management": false,
00:12:04.181 "zone_append": false,
00:12:04.181 "compare": false,
00:12:04.181 "compare_and_write": false,
00:12:04.181 "abort": true,
00:12:04.181 "seek_hole": false,
00:12:04.181 "seek_data": false,
00:12:04.181 "copy": true,
00:12:04.181 "nvme_iov_md": false
00:12:04.181 },
00:12:04.181 "memory_domains": [
00:12:04.181 {
00:12:04.181 "dma_device_id": "system",
00:12:04.181 "dma_device_type": 1
00:12:04.181 },
00:12:04.181 {
00:12:04.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:04.181 "dma_device_type": 2
00:12:04.181 }
00:12:04.181 ],
00:12:04.181 "driver_specific": {}
00:12:04.181 }
00:12:04.181 ]
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.181 "name": "Existed_Raid",
00:12:04.181 "uuid": "84522684-bffc-4f75-aec9-4bc360623bb9",
00:12:04.181 "strip_size_kb": 64,
00:12:04.181 "state": "configuring",
00:12:04.181 "raid_level": "raid0",
00:12:04.181 "superblock": true,
00:12:04.181 "num_base_bdevs": 3,
00:12:04.181 "num_base_bdevs_discovered": 1,
00:12:04.181 "num_base_bdevs_operational": 3,
00:12:04.181 "base_bdevs_list": [
00:12:04.181 {
00:12:04.181 "name": "BaseBdev1",
00:12:04.181 "uuid": "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc",
00:12:04.181 "is_configured": true,
00:12:04.181 "data_offset": 2048,
00:12:04.181 "data_size": 63488
00:12:04.181 },
00:12:04.181 {
00:12:04.181 "name": "BaseBdev2",
00:12:04.181 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.181 "is_configured": false,
00:12:04.181 "data_offset": 0,
00:12:04.181 "data_size": 0
00:12:04.181 },
00:12:04.181 {
00:12:04.181 "name": "BaseBdev3",
00:12:04.181 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.181 "is_configured": false,
00:12:04.181 "data_offset": 0,
00:12:04.181 "data_size": 0
00:12:04.181 }
00:12:04.181 ]
00:12:04.181 }'
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.181 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.750 [2024-12-05 19:31:57.936434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:04.750 [2024-12-05 19:31:57.936499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.750 [2024-12-05 19:31:57.944483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed [2024-12-05 19:31:57.946940] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:04.750 [2024-12-05 19:31:57.947142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:04.750 [2024-12-05 19:31:57.947171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:04.750 [2024-12-05 19:31:57.947191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.750 19:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.750 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.750 "name": "Existed_Raid",
00:12:04.750 "uuid": "cad2b823-a78e-48ef-b955-f2c69c36bf56",
00:12:04.750 "strip_size_kb": 64,
00:12:04.750 "state": "configuring",
00:12:04.750 "raid_level": "raid0",
00:12:04.750 "superblock": true,
00:12:04.750 "num_base_bdevs": 3,
00:12:04.750 "num_base_bdevs_discovered": 1,
00:12:04.750 "num_base_bdevs_operational": 3,
00:12:04.750 "base_bdevs_list": [
00:12:04.750 {
00:12:04.750 "name": "BaseBdev1",
00:12:04.750 "uuid": "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc",
00:12:04.750 "is_configured": true,
00:12:04.750 "data_offset": 2048,
00:12:04.750 "data_size": 63488
00:12:04.750 },
00:12:04.750 {
00:12:04.750 "name": "BaseBdev2",
00:12:04.750 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.750 "is_configured": false,
00:12:04.750 "data_offset": 0,
00:12:04.750 "data_size": 0
00:12:04.750 },
00:12:04.750 {
00:12:04.750 "name": "BaseBdev3",
00:12:04.750 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.750 "is_configured": false,
00:12:04.750 "data_offset": 0,
00:12:04.750 "data_size": 0
00:12:04.750 }
00:12:04.750 ]
00:12:04.750 }'
00:12:04.750 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.750 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.009 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:05.009 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.009 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.269 [2024-12-05 19:31:58.483071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:05.269 BaseBdev2
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.269 [
00:12:05.269 {
00:12:05.269 "name": "BaseBdev2",
00:12:05.269 "aliases": [
00:12:05.269 "81028935-78a4-48b2-9f89-8d3f70d1a55f"
00:12:05.269 ],
00:12:05.269 "product_name": "Malloc disk",
00:12:05.269 "block_size": 512,
00:12:05.269 "num_blocks": 65536,
00:12:05.269 "uuid": "81028935-78a4-48b2-9f89-8d3f70d1a55f",
00:12:05.269 "assigned_rate_limits": {
00:12:05.269 "rw_ios_per_sec": 0,
00:12:05.269 "rw_mbytes_per_sec": 0,
00:12:05.269 "r_mbytes_per_sec": 0,
00:12:05.269 "w_mbytes_per_sec": 0
00:12:05.269 },
00:12:05.269 "claimed": true,
00:12:05.269 "claim_type": "exclusive_write",
00:12:05.269 "zoned": false,
00:12:05.269 "supported_io_types": {
00:12:05.269 "read": true,
00:12:05.269 "write": true,
00:12:05.269 "unmap": true,
00:12:05.269 "flush": true,
00:12:05.269 "reset": true,
00:12:05.269 "nvme_admin": false,
00:12:05.269 "nvme_io": false,
00:12:05.269 "nvme_io_md": false,
00:12:05.269 "write_zeroes": true,
00:12:05.269 "zcopy": true,
00:12:05.269 "get_zone_info": false,
00:12:05.269 "zone_management": false,
00:12:05.269 "zone_append": false,
00:12:05.269 "compare": false,
00:12:05.269 "compare_and_write": false,
00:12:05.269 "abort": true,
00:12:05.269 "seek_hole": false,
00:12:05.269 "seek_data": false,
00:12:05.269 "copy": true,
00:12:05.269 "nvme_iov_md": false
00:12:05.269 },
00:12:05.269 "memory_domains": [
00:12:05.269 {
00:12:05.269 "dma_device_id": "system",
00:12:05.269 "dma_device_type": 1
00:12:05.269 },
00:12:05.269 {
00:12:05.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.269 "dma_device_type": 2
00:12:05.269 }
00:12:05.269 ],
00:12:05.269 "driver_specific": {}
00:12:05.269 }
00:12:05.269 ]
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.269 "name": "Existed_Raid",
00:12:05.269 "uuid": "cad2b823-a78e-48ef-b955-f2c69c36bf56",
00:12:05.269 "strip_size_kb": 64,
00:12:05.269 "state": "configuring",
00:12:05.269 "raid_level": "raid0",
00:12:05.269 "superblock": true,
00:12:05.269 "num_base_bdevs": 3,
00:12:05.269 "num_base_bdevs_discovered": 2,
00:12:05.269 "num_base_bdevs_operational": 3,
00:12:05.269 "base_bdevs_list": [
00:12:05.269 {
00:12:05.269 "name": "BaseBdev1",
00:12:05.269 "uuid": "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc",
00:12:05.269 "is_configured": true,
00:12:05.269 "data_offset": 2048,
00:12:05.269 "data_size": 63488
00:12:05.269 },
00:12:05.269 {
00:12:05.269 "name": "BaseBdev2",
00:12:05.269 "uuid": "81028935-78a4-48b2-9f89-8d3f70d1a55f",
00:12:05.269 "is_configured": true,
00:12:05.269 "data_offset": 2048,
00:12:05.269 "data_size": 63488
00:12:05.269 },
00:12:05.269 {
00:12:05.269 "name": "BaseBdev3",
00:12:05.269 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.269 "is_configured": false,
00:12:05.269 "data_offset": 0,
00:12:05.269 "data_size": 0
00:12:05.269 }
00:12:05.269 ]
00:12:05.269 }'
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.269 19:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.837 [2024-12-05 19:31:59.078431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed [2024-12-05 19:31:59.078807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 [2024-12-05 19:31:59.078840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:05.837 BaseBdev3 [2024-12-05 19:31:59.079180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 [2024-12-05 19:31:59.079398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 [2024-12-05 19:31:59.079415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 [2024-12-05 19:31:59.079597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.837 [
00:12:05.837 {
00:12:05.837 "name": "BaseBdev3",
00:12:05.837 "aliases": [
00:12:05.837 "406d6d1a-4ba8-4117-bbd4-0881fad678b9"
00:12:05.837 ],
00:12:05.837 "product_name": "Malloc disk",
00:12:05.837 "block_size": 512,
00:12:05.837 "num_blocks": 65536,
00:12:05.837 "uuid": "406d6d1a-4ba8-4117-bbd4-0881fad678b9",
00:12:05.837 "assigned_rate_limits": {
00:12:05.837 "rw_ios_per_sec": 0,
00:12:05.837 "rw_mbytes_per_sec": 0,
00:12:05.837 "r_mbytes_per_sec": 0,
00:12:05.837 "w_mbytes_per_sec": 0
00:12:05.837 },
00:12:05.837 "claimed": true,
00:12:05.837 "claim_type": "exclusive_write",
00:12:05.837 "zoned": false,
00:12:05.837 "supported_io_types": {
00:12:05.837 "read": true,
00:12:05.837 "write": true,
00:12:05.837 "unmap": true,
00:12:05.837 "flush": true,
00:12:05.837 "reset": true,
00:12:05.837 "nvme_admin": false,
00:12:05.837 "nvme_io": false,
00:12:05.837 "nvme_io_md": false,
00:12:05.837 "write_zeroes": true,
00:12:05.837 "zcopy": true,
00:12:05.837 "get_zone_info": false,
00:12:05.837 "zone_management": false,
00:12:05.837 "zone_append": false,
00:12:05.837 "compare": false,
00:12:05.837 "compare_and_write": false,
00:12:05.837 "abort": true,
00:12:05.837 "seek_hole": false,
00:12:05.837 "seek_data": false,
00:12:05.837 "copy": true,
00:12:05.837 "nvme_iov_md": false
00:12:05.837 },
00:12:05.837 "memory_domains": [
00:12:05.837 {
00:12:05.837 "dma_device_id": "system",
00:12:05.837 "dma_device_type": 1
00:12:05.837 },
00:12:05.837 {
00:12:05.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.837 "dma_device_type": 2
00:12:05.837 }
00:12:05.837 ],
00:12:05.837 "driver_specific": {}
00:12:05.837 }
00:12:05.837 ]
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.837 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.838 "name": "Existed_Raid",
00:12:05.838 "uuid": "cad2b823-a78e-48ef-b955-f2c69c36bf56",
00:12:05.838 "strip_size_kb": 64,
00:12:05.838 "state": "online",
00:12:05.838 "raid_level": "raid0",
00:12:05.838 "superblock": true,
00:12:05.838 "num_base_bdevs": 3,
00:12:05.838 "num_base_bdevs_discovered": 3,
00:12:05.838 "num_base_bdevs_operational": 3,
00:12:05.838 "base_bdevs_list": [
00:12:05.838 {
00:12:05.838 "name": "BaseBdev1",
00:12:05.838 "uuid": "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc",
00:12:05.838 "is_configured": true,
00:12:05.838 "data_offset": 2048,
00:12:05.838 "data_size": 63488
00:12:05.838 },
00:12:05.838 {
00:12:05.838 "name": "BaseBdev2",
00:12:05.838 "uuid": "81028935-78a4-48b2-9f89-8d3f70d1a55f",
00:12:05.838 "is_configured": true,
00:12:05.838 "data_offset": 2048,
00:12:05.838 "data_size": 63488
00:12:05.838 },
00:12:05.838 {
00:12:05.838 "name": "BaseBdev3",
00:12:05.838 "uuid": "406d6d1a-4ba8-4117-bbd4-0881fad678b9",
00:12:05.838 "is_configured": true,
00:12:05.838 "data_offset": 2048,
00:12:05.838 "data_size": 63488
00:12:05.838 }
00:12:05.838 ]
00:12:05.838 }'
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.838 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.405 [2024-12-05 19:31:59.635039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:06.405 "name": "Existed_Raid",
00:12:06.405 "aliases": [
00:12:06.405 "cad2b823-a78e-48ef-b955-f2c69c36bf56"
00:12:06.405 ],
00:12:06.405 "product_name": "Raid Volume",
00:12:06.405 "block_size": 512,
00:12:06.405 "num_blocks": 190464,
00:12:06.405 "uuid": "cad2b823-a78e-48ef-b955-f2c69c36bf56",
00:12:06.405 "assigned_rate_limits": {
00:12:06.405 "rw_ios_per_sec": 0,
00:12:06.405 "rw_mbytes_per_sec": 0,
00:12:06.405 "r_mbytes_per_sec": 0,
00:12:06.405 "w_mbytes_per_sec": 0
00:12:06.405 },
00:12:06.405 "claimed": false,
00:12:06.405 "zoned": false,
00:12:06.405 "supported_io_types": {
00:12:06.405 "read": true,
00:12:06.405 "write": true,
00:12:06.405 "unmap": true,
00:12:06.405 "flush": true,
00:12:06.405 "reset": true,
00:12:06.405 "nvme_admin": false,
00:12:06.405 "nvme_io": false,
00:12:06.405 "nvme_io_md": false,
00:12:06.405 "write_zeroes": true,
00:12:06.405 "zcopy": false,
00:12:06.405 "get_zone_info": false,
00:12:06.405 "zone_management": false,
00:12:06.405 "zone_append": false,
00:12:06.405 "compare": false,
00:12:06.405 "compare_and_write": false,
00:12:06.405 "abort": false,
00:12:06.405 "seek_hole": false,
00:12:06.405 "seek_data": false,
00:12:06.405 "copy": false,
00:12:06.405 "nvme_iov_md": false
00:12:06.405 },
00:12:06.405 "memory_domains": [
00:12:06.405 {
00:12:06.405 "dma_device_id": "system",
00:12:06.405 "dma_device_type": 1
00:12:06.405 },
00:12:06.405 {
00:12:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.405 "dma_device_type": 2
00:12:06.405 },
00:12:06.405 {
00:12:06.405 "dma_device_id": "system",
00:12:06.405 "dma_device_type": 1
00:12:06.405 },
00:12:06.405 {
00:12:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.405 "dma_device_type": 2
00:12:06.405 },
00:12:06.405 {
00:12:06.405 "dma_device_id": "system",
00:12:06.405 "dma_device_type": 1
00:12:06.405 },
00:12:06.405 {
00:12:06.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.405 "dma_device_type": 2
00:12:06.405 }
00:12:06.405 ],
00:12:06.405 "driver_specific": {
00:12:06.405 "raid": {
00:12:06.405 "uuid": "cad2b823-a78e-48ef-b955-f2c69c36bf56",
00:12:06.405 "strip_size_kb": 64,
00:12:06.405 "state": "online",
00:12:06.405 "raid_level": "raid0",
00:12:06.405 "superblock": true,
00:12:06.405 "num_base_bdevs": 3,
00:12:06.405 "num_base_bdevs_discovered": 3,
00:12:06.405 "num_base_bdevs_operational": 3,
00:12:06.405 "base_bdevs_list": [
00:12:06.405 {
00:12:06.405 "name": "BaseBdev1",
00:12:06.405 "uuid": "5c309f34-4f50-4a26-9fdc-c36e3aa3a0cc",
00:12:06.405 "is_configured": true,
00:12:06.405 "data_offset": 2048,
00:12:06.405 "data_size": 63488
00:12:06.405 },
00:12:06.405 {
00:12:06.405 "name": "BaseBdev2",
00:12:06.405 "uuid": "81028935-78a4-48b2-9f89-8d3f70d1a55f",
00:12:06.405 "is_configured": true,
00:12:06.405 "data_offset": 2048,
00:12:06.405 "data_size": 63488
00:12:06.405 },
00:12:06.405 { 00:12:06.405 "name": "BaseBdev3", 00:12:06.405 "uuid": "406d6d1a-4ba8-4117-bbd4-0881fad678b9", 00:12:06.405 "is_configured": true, 00:12:06.405 "data_offset": 2048, 00:12:06.405 "data_size": 63488 00:12:06.405 } 00:12:06.405 ] 00:12:06.405 } 00:12:06.405 } 00:12:06.405 }' 00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:06.405 BaseBdev2 00:12:06.405 BaseBdev3' 00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.405 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.406 
19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.406 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.665 19:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.665 [2024-12-05 19:31:59.942808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.665 [2024-12-05 19:31:59.942847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.665 [2024-12-05 19:31:59.942922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.665 "name": "Existed_Raid", 00:12:06.665 "uuid": "cad2b823-a78e-48ef-b955-f2c69c36bf56", 00:12:06.665 "strip_size_kb": 64, 00:12:06.665 "state": "offline", 00:12:06.665 "raid_level": "raid0", 00:12:06.665 "superblock": true, 00:12:06.665 "num_base_bdevs": 3, 00:12:06.665 "num_base_bdevs_discovered": 2, 00:12:06.665 "num_base_bdevs_operational": 2, 00:12:06.665 "base_bdevs_list": [ 00:12:06.665 { 00:12:06.665 "name": null, 00:12:06.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.665 "is_configured": false, 00:12:06.665 "data_offset": 0, 00:12:06.665 "data_size": 63488 00:12:06.665 }, 00:12:06.665 { 00:12:06.665 "name": "BaseBdev2", 00:12:06.665 "uuid": "81028935-78a4-48b2-9f89-8d3f70d1a55f", 00:12:06.665 "is_configured": true, 00:12:06.665 "data_offset": 2048, 00:12:06.665 "data_size": 63488 00:12:06.665 }, 00:12:06.665 { 00:12:06.665 "name": "BaseBdev3", 00:12:06.665 "uuid": "406d6d1a-4ba8-4117-bbd4-0881fad678b9", 
00:12:06.665 "is_configured": true, 00:12:06.665 "data_offset": 2048, 00:12:06.665 "data_size": 63488 00:12:06.665 } 00:12:06.665 ] 00:12:06.665 }' 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.665 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.232 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.232 [2024-12-05 19:32:00.612833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 [2024-12-05 19:32:00.753631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.491 [2024-12-05 19:32:00.753717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.750 BaseBdev2 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.750 [ 00:12:07.750 { 00:12:07.750 "name": "BaseBdev2", 00:12:07.750 "aliases": [ 00:12:07.750 "3499be5a-faa2-4235-82fa-2e2120296782" 00:12:07.750 ], 00:12:07.750 "product_name": "Malloc disk", 00:12:07.750 "block_size": 512, 00:12:07.750 "num_blocks": 65536, 00:12:07.750 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:07.750 "assigned_rate_limits": { 00:12:07.750 "rw_ios_per_sec": 0, 00:12:07.750 "rw_mbytes_per_sec": 0, 00:12:07.750 "r_mbytes_per_sec": 0, 00:12:07.750 "w_mbytes_per_sec": 0 00:12:07.750 }, 00:12:07.750 "claimed": false, 00:12:07.750 "zoned": false, 00:12:07.750 "supported_io_types": { 00:12:07.750 "read": true, 00:12:07.750 "write": true, 00:12:07.750 "unmap": true, 00:12:07.750 "flush": true, 00:12:07.750 "reset": true, 00:12:07.750 "nvme_admin": false, 00:12:07.750 "nvme_io": false, 00:12:07.750 "nvme_io_md": false, 00:12:07.750 "write_zeroes": true, 00:12:07.750 "zcopy": true, 00:12:07.750 "get_zone_info": false, 00:12:07.750 "zone_management": false, 00:12:07.750 
"zone_append": false, 00:12:07.750 "compare": false, 00:12:07.750 "compare_and_write": false, 00:12:07.750 "abort": true, 00:12:07.750 "seek_hole": false, 00:12:07.750 "seek_data": false, 00:12:07.750 "copy": true, 00:12:07.750 "nvme_iov_md": false 00:12:07.750 }, 00:12:07.750 "memory_domains": [ 00:12:07.750 { 00:12:07.750 "dma_device_id": "system", 00:12:07.750 "dma_device_type": 1 00:12:07.750 }, 00:12:07.750 { 00:12:07.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.750 "dma_device_type": 2 00:12:07.750 } 00:12:07.750 ], 00:12:07.750 "driver_specific": {} 00:12:07.750 } 00:12:07.750 ] 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.750 19:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.750 BaseBdev3 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.750 
19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.750 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.751 [ 00:12:07.751 { 00:12:07.751 "name": "BaseBdev3", 00:12:07.751 "aliases": [ 00:12:07.751 "d806f20e-0e10-4696-933f-902e04425d5e" 00:12:07.751 ], 00:12:07.751 "product_name": "Malloc disk", 00:12:07.751 "block_size": 512, 00:12:07.751 "num_blocks": 65536, 00:12:07.751 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:07.751 "assigned_rate_limits": { 00:12:07.751 "rw_ios_per_sec": 0, 00:12:07.751 "rw_mbytes_per_sec": 0, 00:12:07.751 "r_mbytes_per_sec": 0, 00:12:07.751 "w_mbytes_per_sec": 0 00:12:07.751 }, 00:12:07.751 "claimed": false, 00:12:07.751 "zoned": false, 00:12:07.751 "supported_io_types": { 00:12:07.751 "read": true, 00:12:07.751 "write": true, 00:12:07.751 "unmap": true, 00:12:07.751 "flush": true, 00:12:07.751 "reset": true, 00:12:07.751 "nvme_admin": false, 00:12:07.751 "nvme_io": false, 00:12:07.751 "nvme_io_md": false, 00:12:07.751 "write_zeroes": true, 00:12:07.751 "zcopy": true, 00:12:07.751 "get_zone_info": false, 
00:12:07.751 "zone_management": false, 00:12:07.751 "zone_append": false, 00:12:07.751 "compare": false, 00:12:07.751 "compare_and_write": false, 00:12:07.751 "abort": true, 00:12:07.751 "seek_hole": false, 00:12:07.751 "seek_data": false, 00:12:07.751 "copy": true, 00:12:07.751 "nvme_iov_md": false 00:12:07.751 }, 00:12:07.751 "memory_domains": [ 00:12:07.751 { 00:12:07.751 "dma_device_id": "system", 00:12:07.751 "dma_device_type": 1 00:12:07.751 }, 00:12:07.751 { 00:12:07.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.751 "dma_device_type": 2 00:12:07.751 } 00:12:07.751 ], 00:12:07.751 "driver_specific": {} 00:12:07.751 } 00:12:07.751 ] 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.751 [2024-12-05 19:32:01.047990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.751 [2024-12-05 19:32:01.048046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.751 [2024-12-05 19:32:01.048080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.751 [2024-12-05 19:32:01.050461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:07.751 "name": "Existed_Raid", 00:12:07.751 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:07.751 "strip_size_kb": 64, 00:12:07.751 "state": "configuring", 00:12:07.751 "raid_level": "raid0", 00:12:07.751 "superblock": true, 00:12:07.751 "num_base_bdevs": 3, 00:12:07.751 "num_base_bdevs_discovered": 2, 00:12:07.751 "num_base_bdevs_operational": 3, 00:12:07.751 "base_bdevs_list": [ 00:12:07.751 { 00:12:07.751 "name": "BaseBdev1", 00:12:07.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.751 "is_configured": false, 00:12:07.751 "data_offset": 0, 00:12:07.751 "data_size": 0 00:12:07.751 }, 00:12:07.751 { 00:12:07.751 "name": "BaseBdev2", 00:12:07.751 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:07.751 "is_configured": true, 00:12:07.751 "data_offset": 2048, 00:12:07.751 "data_size": 63488 00:12:07.751 }, 00:12:07.751 { 00:12:07.751 "name": "BaseBdev3", 00:12:07.751 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:07.751 "is_configured": true, 00:12:07.751 "data_offset": 2048, 00:12:07.751 "data_size": 63488 00:12:07.751 } 00:12:07.751 ] 00:12:07.751 }' 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.751 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.318 [2024-12-05 19:32:01.592172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.318 "name": "Existed_Raid", 00:12:08.318 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:08.318 "strip_size_kb": 64, 00:12:08.318 "state": "configuring", 00:12:08.318 "raid_level": "raid0", 
00:12:08.318 "superblock": true, 00:12:08.318 "num_base_bdevs": 3, 00:12:08.318 "num_base_bdevs_discovered": 1, 00:12:08.318 "num_base_bdevs_operational": 3, 00:12:08.318 "base_bdevs_list": [ 00:12:08.318 { 00:12:08.318 "name": "BaseBdev1", 00:12:08.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.318 "is_configured": false, 00:12:08.318 "data_offset": 0, 00:12:08.318 "data_size": 0 00:12:08.318 }, 00:12:08.318 { 00:12:08.318 "name": null, 00:12:08.318 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:08.318 "is_configured": false, 00:12:08.318 "data_offset": 0, 00:12:08.318 "data_size": 63488 00:12:08.318 }, 00:12:08.318 { 00:12:08.318 "name": "BaseBdev3", 00:12:08.318 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:08.318 "is_configured": true, 00:12:08.318 "data_offset": 2048, 00:12:08.318 "data_size": 63488 00:12:08.318 } 00:12:08.318 ] 00:12:08.318 }' 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.318 19:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.886 [2024-12-05 19:32:02.220198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.886 BaseBdev1 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.886 [ 00:12:08.886 { 00:12:08.886 "name": "BaseBdev1", 00:12:08.886 
"aliases": [ 00:12:08.886 "4aab55ed-dfda-405d-b2ee-193603bb77f9" 00:12:08.886 ], 00:12:08.886 "product_name": "Malloc disk", 00:12:08.886 "block_size": 512, 00:12:08.886 "num_blocks": 65536, 00:12:08.886 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:08.886 "assigned_rate_limits": { 00:12:08.886 "rw_ios_per_sec": 0, 00:12:08.886 "rw_mbytes_per_sec": 0, 00:12:08.886 "r_mbytes_per_sec": 0, 00:12:08.886 "w_mbytes_per_sec": 0 00:12:08.886 }, 00:12:08.886 "claimed": true, 00:12:08.886 "claim_type": "exclusive_write", 00:12:08.886 "zoned": false, 00:12:08.886 "supported_io_types": { 00:12:08.886 "read": true, 00:12:08.886 "write": true, 00:12:08.886 "unmap": true, 00:12:08.886 "flush": true, 00:12:08.886 "reset": true, 00:12:08.886 "nvme_admin": false, 00:12:08.886 "nvme_io": false, 00:12:08.886 "nvme_io_md": false, 00:12:08.886 "write_zeroes": true, 00:12:08.886 "zcopy": true, 00:12:08.886 "get_zone_info": false, 00:12:08.886 "zone_management": false, 00:12:08.886 "zone_append": false, 00:12:08.886 "compare": false, 00:12:08.886 "compare_and_write": false, 00:12:08.886 "abort": true, 00:12:08.886 "seek_hole": false, 00:12:08.886 "seek_data": false, 00:12:08.886 "copy": true, 00:12:08.886 "nvme_iov_md": false 00:12:08.886 }, 00:12:08.886 "memory_domains": [ 00:12:08.886 { 00:12:08.886 "dma_device_id": "system", 00:12:08.886 "dma_device_type": 1 00:12:08.886 }, 00:12:08.886 { 00:12:08.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.886 "dma_device_type": 2 00:12:08.886 } 00:12:08.886 ], 00:12:08.886 "driver_specific": {} 00:12:08.886 } 00:12:08.886 ] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.886 19:32:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.886 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.886 "name": "Existed_Raid", 00:12:08.886 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:08.886 "strip_size_kb": 64, 00:12:08.886 "state": "configuring", 00:12:08.886 "raid_level": "raid0", 00:12:08.886 "superblock": true, 00:12:08.886 "num_base_bdevs": 3, 00:12:08.886 
"num_base_bdevs_discovered": 2, 00:12:08.886 "num_base_bdevs_operational": 3, 00:12:08.886 "base_bdevs_list": [ 00:12:08.886 { 00:12:08.886 "name": "BaseBdev1", 00:12:08.886 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:08.886 "is_configured": true, 00:12:08.886 "data_offset": 2048, 00:12:08.887 "data_size": 63488 00:12:08.887 }, 00:12:08.887 { 00:12:08.887 "name": null, 00:12:08.887 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:08.887 "is_configured": false, 00:12:08.887 "data_offset": 0, 00:12:08.887 "data_size": 63488 00:12:08.887 }, 00:12:08.887 { 00:12:08.887 "name": "BaseBdev3", 00:12:08.887 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:08.887 "is_configured": true, 00:12:08.887 "data_offset": 2048, 00:12:08.887 "data_size": 63488 00:12:08.887 } 00:12:08.887 ] 00:12:08.887 }' 00:12:08.887 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.887 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.454 19:32:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.454 [2024-12-05 19:32:02.872473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.454 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.714 19:32:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.714 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.714 "name": "Existed_Raid", 00:12:09.714 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:09.714 "strip_size_kb": 64, 00:12:09.714 "state": "configuring", 00:12:09.714 "raid_level": "raid0", 00:12:09.714 "superblock": true, 00:12:09.714 "num_base_bdevs": 3, 00:12:09.714 "num_base_bdevs_discovered": 1, 00:12:09.714 "num_base_bdevs_operational": 3, 00:12:09.714 "base_bdevs_list": [ 00:12:09.714 { 00:12:09.714 "name": "BaseBdev1", 00:12:09.714 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:09.714 "is_configured": true, 00:12:09.714 "data_offset": 2048, 00:12:09.714 "data_size": 63488 00:12:09.714 }, 00:12:09.714 { 00:12:09.714 "name": null, 00:12:09.714 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:09.714 "is_configured": false, 00:12:09.714 "data_offset": 0, 00:12:09.714 "data_size": 63488 00:12:09.714 }, 00:12:09.714 { 00:12:09.714 "name": null, 00:12:09.714 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:09.714 "is_configured": false, 00:12:09.714 "data_offset": 0, 00:12:09.714 "data_size": 63488 00:12:09.714 } 00:12:09.714 ] 00:12:09.714 }' 00:12:09.714 19:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.714 19:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.973 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.973 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.973 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.973 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.973 19:32:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.233 [2024-12-05 19:32:03.424655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.233 "name": "Existed_Raid", 00:12:10.233 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:10.233 "strip_size_kb": 64, 00:12:10.233 "state": "configuring", 00:12:10.233 "raid_level": "raid0", 00:12:10.233 "superblock": true, 00:12:10.233 "num_base_bdevs": 3, 00:12:10.233 "num_base_bdevs_discovered": 2, 00:12:10.233 "num_base_bdevs_operational": 3, 00:12:10.233 "base_bdevs_list": [ 00:12:10.233 { 00:12:10.233 "name": "BaseBdev1", 00:12:10.233 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:10.233 "is_configured": true, 00:12:10.233 "data_offset": 2048, 00:12:10.233 "data_size": 63488 00:12:10.233 }, 00:12:10.233 { 00:12:10.233 "name": null, 00:12:10.233 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:10.233 "is_configured": false, 00:12:10.233 "data_offset": 0, 00:12:10.233 "data_size": 63488 00:12:10.233 }, 00:12:10.233 { 00:12:10.233 "name": "BaseBdev3", 00:12:10.233 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:10.233 "is_configured": true, 00:12:10.233 "data_offset": 2048, 00:12:10.233 "data_size": 63488 00:12:10.233 } 00:12:10.233 ] 00:12:10.233 }' 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.233 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:10.492 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.492 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.492 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.492 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.751 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.751 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:10.751 19:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.751 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.751 19:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.751 [2024-12-05 19:32:03.976904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.751 "name": "Existed_Raid", 00:12:10.751 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:10.751 "strip_size_kb": 64, 00:12:10.751 "state": "configuring", 00:12:10.751 "raid_level": "raid0", 00:12:10.751 "superblock": true, 00:12:10.751 "num_base_bdevs": 3, 00:12:10.751 "num_base_bdevs_discovered": 1, 00:12:10.751 "num_base_bdevs_operational": 3, 00:12:10.751 "base_bdevs_list": [ 00:12:10.751 { 00:12:10.751 "name": null, 00:12:10.751 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:10.751 "is_configured": false, 00:12:10.751 "data_offset": 0, 00:12:10.751 "data_size": 63488 00:12:10.751 }, 00:12:10.751 { 00:12:10.751 "name": null, 00:12:10.751 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:10.751 "is_configured": false, 00:12:10.751 "data_offset": 0, 00:12:10.751 "data_size": 63488 00:12:10.751 
}, 00:12:10.751 { 00:12:10.751 "name": "BaseBdev3", 00:12:10.751 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:10.751 "is_configured": true, 00:12:10.751 "data_offset": 2048, 00:12:10.751 "data_size": 63488 00:12:10.751 } 00:12:10.751 ] 00:12:10.751 }' 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.751 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 [2024-12-05 19:32:04.605020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.318 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.318 "name": "Existed_Raid", 00:12:11.318 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:11.318 "strip_size_kb": 64, 00:12:11.318 "state": "configuring", 00:12:11.318 "raid_level": "raid0", 00:12:11.318 "superblock": true, 00:12:11.318 "num_base_bdevs": 3, 00:12:11.318 "num_base_bdevs_discovered": 2, 00:12:11.318 
"num_base_bdevs_operational": 3, 00:12:11.318 "base_bdevs_list": [ 00:12:11.318 { 00:12:11.318 "name": null, 00:12:11.318 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:11.318 "is_configured": false, 00:12:11.318 "data_offset": 0, 00:12:11.318 "data_size": 63488 00:12:11.318 }, 00:12:11.318 { 00:12:11.319 "name": "BaseBdev2", 00:12:11.319 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:11.319 "is_configured": true, 00:12:11.319 "data_offset": 2048, 00:12:11.319 "data_size": 63488 00:12:11.319 }, 00:12:11.319 { 00:12:11.319 "name": "BaseBdev3", 00:12:11.319 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:11.319 "is_configured": true, 00:12:11.319 "data_offset": 2048, 00:12:11.319 "data_size": 63488 00:12:11.319 } 00:12:11.319 ] 00:12:11.319 }' 00:12:11.319 19:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.319 19:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4aab55ed-dfda-405d-b2ee-193603bb77f9 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.884 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.884 [2024-12-05 19:32:05.262650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:11.884 [2024-12-05 19:32:05.262975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:11.884 [2024-12-05 19:32:05.262999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:11.884 [2024-12-05 19:32:05.263312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:11.884 [2024-12-05 19:32:05.263502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:11.884 [2024-12-05 19:32:05.263525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:11.884 [2024-12-05 19:32:05.263735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.884 NewBaseBdev 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:11.885 19:32:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 [ 00:12:11.885 { 00:12:11.885 "name": "NewBaseBdev", 00:12:11.885 "aliases": [ 00:12:11.885 "4aab55ed-dfda-405d-b2ee-193603bb77f9" 00:12:11.885 ], 00:12:11.885 "product_name": "Malloc disk", 00:12:11.885 "block_size": 512, 00:12:11.885 "num_blocks": 65536, 00:12:11.885 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:11.885 "assigned_rate_limits": { 00:12:11.885 "rw_ios_per_sec": 0, 00:12:11.885 "rw_mbytes_per_sec": 0, 00:12:11.885 "r_mbytes_per_sec": 0, 00:12:11.885 "w_mbytes_per_sec": 0 00:12:11.885 }, 00:12:11.885 "claimed": true, 00:12:11.885 "claim_type": "exclusive_write", 00:12:11.885 "zoned": false, 00:12:11.885 "supported_io_types": { 00:12:11.885 "read": true, 00:12:11.885 "write": true, 00:12:11.885 "unmap": true, 
00:12:11.885 "flush": true, 00:12:11.885 "reset": true, 00:12:11.885 "nvme_admin": false, 00:12:11.885 "nvme_io": false, 00:12:11.885 "nvme_io_md": false, 00:12:11.885 "write_zeroes": true, 00:12:11.885 "zcopy": true, 00:12:11.885 "get_zone_info": false, 00:12:11.885 "zone_management": false, 00:12:11.885 "zone_append": false, 00:12:11.885 "compare": false, 00:12:11.885 "compare_and_write": false, 00:12:11.885 "abort": true, 00:12:11.885 "seek_hole": false, 00:12:11.885 "seek_data": false, 00:12:11.885 "copy": true, 00:12:11.885 "nvme_iov_md": false 00:12:11.885 }, 00:12:11.885 "memory_domains": [ 00:12:11.885 { 00:12:11.885 "dma_device_id": "system", 00:12:11.885 "dma_device_type": 1 00:12:11.885 }, 00:12:11.885 { 00:12:11.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.885 "dma_device_type": 2 00:12:11.885 } 00:12:11.885 ], 00:12:11.885 "driver_specific": {} 00:12:11.885 } 00:12:11.885 ] 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.885 19:32:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.885 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.143 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.143 "name": "Existed_Raid", 00:12:12.143 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:12.143 "strip_size_kb": 64, 00:12:12.143 "state": "online", 00:12:12.143 "raid_level": "raid0", 00:12:12.143 "superblock": true, 00:12:12.143 "num_base_bdevs": 3, 00:12:12.143 "num_base_bdevs_discovered": 3, 00:12:12.143 "num_base_bdevs_operational": 3, 00:12:12.143 "base_bdevs_list": [ 00:12:12.143 { 00:12:12.143 "name": "NewBaseBdev", 00:12:12.143 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:12.143 "is_configured": true, 00:12:12.143 "data_offset": 2048, 00:12:12.143 "data_size": 63488 00:12:12.143 }, 00:12:12.143 { 00:12:12.143 "name": "BaseBdev2", 00:12:12.143 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:12.143 "is_configured": true, 00:12:12.143 "data_offset": 2048, 00:12:12.143 "data_size": 63488 00:12:12.143 }, 00:12:12.143 { 00:12:12.143 "name": "BaseBdev3", 00:12:12.143 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:12.143 "is_configured": 
true, 00:12:12.143 "data_offset": 2048, 00:12:12.143 "data_size": 63488 00:12:12.143 } 00:12:12.143 ] 00:12:12.143 }' 00:12:12.143 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.143 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.402 [2024-12-05 19:32:05.791218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.402 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.402 "name": "Existed_Raid", 00:12:12.402 "aliases": [ 00:12:12.402 "05617e61-663c-4054-bb89-ab38522ef058" 00:12:12.402 ], 00:12:12.402 "product_name": "Raid Volume", 
00:12:12.402 "block_size": 512, 00:12:12.402 "num_blocks": 190464, 00:12:12.402 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:12.402 "assigned_rate_limits": { 00:12:12.402 "rw_ios_per_sec": 0, 00:12:12.402 "rw_mbytes_per_sec": 0, 00:12:12.402 "r_mbytes_per_sec": 0, 00:12:12.402 "w_mbytes_per_sec": 0 00:12:12.402 }, 00:12:12.402 "claimed": false, 00:12:12.402 "zoned": false, 00:12:12.402 "supported_io_types": { 00:12:12.402 "read": true, 00:12:12.402 "write": true, 00:12:12.402 "unmap": true, 00:12:12.402 "flush": true, 00:12:12.402 "reset": true, 00:12:12.402 "nvme_admin": false, 00:12:12.402 "nvme_io": false, 00:12:12.402 "nvme_io_md": false, 00:12:12.402 "write_zeroes": true, 00:12:12.402 "zcopy": false, 00:12:12.402 "get_zone_info": false, 00:12:12.402 "zone_management": false, 00:12:12.402 "zone_append": false, 00:12:12.402 "compare": false, 00:12:12.402 "compare_and_write": false, 00:12:12.402 "abort": false, 00:12:12.402 "seek_hole": false, 00:12:12.402 "seek_data": false, 00:12:12.402 "copy": false, 00:12:12.402 "nvme_iov_md": false 00:12:12.402 }, 00:12:12.402 "memory_domains": [ 00:12:12.402 { 00:12:12.402 "dma_device_id": "system", 00:12:12.402 "dma_device_type": 1 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.402 "dma_device_type": 2 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "dma_device_id": "system", 00:12:12.402 "dma_device_type": 1 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.402 "dma_device_type": 2 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "dma_device_id": "system", 00:12:12.402 "dma_device_type": 1 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.402 "dma_device_type": 2 00:12:12.402 } 00:12:12.402 ], 00:12:12.402 "driver_specific": { 00:12:12.402 "raid": { 00:12:12.402 "uuid": "05617e61-663c-4054-bb89-ab38522ef058", 00:12:12.402 "strip_size_kb": 64, 00:12:12.402 "state": "online", 00:12:12.402 
"raid_level": "raid0", 00:12:12.402 "superblock": true, 00:12:12.402 "num_base_bdevs": 3, 00:12:12.402 "num_base_bdevs_discovered": 3, 00:12:12.402 "num_base_bdevs_operational": 3, 00:12:12.402 "base_bdevs_list": [ 00:12:12.402 { 00:12:12.402 "name": "NewBaseBdev", 00:12:12.402 "uuid": "4aab55ed-dfda-405d-b2ee-193603bb77f9", 00:12:12.402 "is_configured": true, 00:12:12.402 "data_offset": 2048, 00:12:12.402 "data_size": 63488 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "name": "BaseBdev2", 00:12:12.402 "uuid": "3499be5a-faa2-4235-82fa-2e2120296782", 00:12:12.402 "is_configured": true, 00:12:12.402 "data_offset": 2048, 00:12:12.402 "data_size": 63488 00:12:12.402 }, 00:12:12.402 { 00:12:12.402 "name": "BaseBdev3", 00:12:12.402 "uuid": "d806f20e-0e10-4696-933f-902e04425d5e", 00:12:12.402 "is_configured": true, 00:12:12.402 "data_offset": 2048, 00:12:12.402 "data_size": 63488 00:12:12.403 } 00:12:12.403 ] 00:12:12.403 } 00:12:12.403 } 00:12:12.403 }' 00:12:12.403 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.662 BaseBdev2 00:12:12.662 BaseBdev3' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.662 19:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.662 19:32:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.662 [2024-12-05 19:32:06.082970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.662 [2024-12-05 19:32:06.083009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.662 [2024-12-05 19:32:06.083143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.662 [2024-12-05 19:32:06.083255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.662 [2024-12-05 19:32:06.083278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64429 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64429 ']' 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64429 00:12:12.662 19:32:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.662 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64429 00:12:12.921 killing process with pid 64429 00:12:12.921 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.921 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.921 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64429' 00:12:12.921 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64429 00:12:12.921 [2024-12-05 19:32:06.123501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.921 19:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64429 00:12:13.179 [2024-12-05 19:32:06.416977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.556 19:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.556 00:12:14.556 real 0m11.871s 00:12:14.556 user 0m19.555s 00:12:14.556 sys 0m1.573s 00:12:14.556 19:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.556 19:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.556 ************************************ 00:12:14.556 END TEST raid_state_function_test_sb 00:12:14.556 ************************************ 00:12:14.556 19:32:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:14.556 19:32:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:14.556 19:32:07 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.556 19:32:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.556 ************************************ 00:12:14.556 START TEST raid_superblock_test 00:12:14.556 ************************************ 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:14.556 19:32:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65066 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65066 00:12:14.556 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65066 ']' 00:12:14.557 19:32:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:14.557 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.557 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.557 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.557 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.557 19:32:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.557 [2024-12-05 19:32:07.759297] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:14.557 [2024-12-05 19:32:07.759486] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65066 ] 00:12:14.557 [2024-12-05 19:32:07.944670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.815 [2024-12-05 19:32:08.093367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.074 [2024-12-05 19:32:08.317489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.074 [2024-12-05 19:32:08.317600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.333 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:15.592 
19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 malloc1 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 [2024-12-05 19:32:08.824063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.592 [2024-12-05 19:32:08.824164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.592 [2024-12-05 19:32:08.824204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:15.592 [2024-12-05 19:32:08.824224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.592 [2024-12-05 19:32:08.827339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.592 [2024-12-05 19:32:08.827388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.592 pt1 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 malloc2 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 [2024-12-05 19:32:08.882126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.592 [2024-12-05 19:32:08.882221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.592 [2024-12-05 19:32:08.882267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:15.592 [2024-12-05 19:32:08.882286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.592 [2024-12-05 19:32:08.885329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.592 [2024-12-05 19:32:08.885379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.592 
pt2 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 malloc3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 [2024-12-05 19:32:08.953411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.592 [2024-12-05 19:32:08.953539] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.592 [2024-12-05 19:32:08.953582] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:15.592 [2024-12-05 19:32:08.953602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.592 [2024-12-05 19:32:08.956979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.592 [2024-12-05 19:32:08.957046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.592 pt3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 [2024-12-05 19:32:08.961529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.592 [2024-12-05 19:32:08.964339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.592 [2024-12-05 19:32:08.964478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:15.592 [2024-12-05 19:32:08.964788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.592 [2024-12-05 19:32:08.964824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:15.592 [2024-12-05 19:32:08.965196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:15.592 [2024-12-05 19:32:08.965480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.592 [2024-12-05 19:32:08.965508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:15.592 [2024-12-05 19:32:08.965817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.592 19:32:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.592 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.593 "name": "raid_bdev1", 00:12:15.593 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:15.593 "strip_size_kb": 64, 00:12:15.593 "state": "online", 00:12:15.593 "raid_level": "raid0", 00:12:15.593 "superblock": true, 00:12:15.593 "num_base_bdevs": 3, 00:12:15.593 "num_base_bdevs_discovered": 3, 00:12:15.593 "num_base_bdevs_operational": 3, 00:12:15.593 "base_bdevs_list": [ 00:12:15.593 { 00:12:15.593 "name": "pt1", 00:12:15.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.593 "is_configured": true, 00:12:15.593 "data_offset": 2048, 00:12:15.593 "data_size": 63488 00:12:15.593 }, 00:12:15.593 { 00:12:15.593 "name": "pt2", 00:12:15.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.593 "is_configured": true, 00:12:15.593 "data_offset": 2048, 00:12:15.593 "data_size": 63488 00:12:15.593 }, 00:12:15.593 { 00:12:15.593 "name": "pt3", 00:12:15.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.593 "is_configured": true, 00:12:15.593 "data_offset": 2048, 00:12:15.593 "data_size": 63488 00:12:15.593 } 00:12:15.593 ] 00:12:15.593 }' 00:12:15.593 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.593 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.159 [2024-12-05 19:32:09.466501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.159 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.159 "name": "raid_bdev1", 00:12:16.159 "aliases": [ 00:12:16.159 "36785447-8fe8-4222-989a-003433d57871" 00:12:16.159 ], 00:12:16.159 "product_name": "Raid Volume", 00:12:16.159 "block_size": 512, 00:12:16.159 "num_blocks": 190464, 00:12:16.159 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:16.159 "assigned_rate_limits": { 00:12:16.159 "rw_ios_per_sec": 0, 00:12:16.159 "rw_mbytes_per_sec": 0, 00:12:16.159 "r_mbytes_per_sec": 0, 00:12:16.159 "w_mbytes_per_sec": 0 00:12:16.159 }, 00:12:16.159 "claimed": false, 00:12:16.159 "zoned": false, 00:12:16.159 "supported_io_types": { 00:12:16.159 "read": true, 00:12:16.159 "write": true, 00:12:16.159 "unmap": true, 00:12:16.159 "flush": true, 00:12:16.159 "reset": true, 00:12:16.159 "nvme_admin": false, 00:12:16.159 "nvme_io": false, 00:12:16.159 "nvme_io_md": false, 00:12:16.159 "write_zeroes": true, 00:12:16.159 "zcopy": false, 00:12:16.159 "get_zone_info": false, 00:12:16.159 "zone_management": false, 00:12:16.159 "zone_append": false, 00:12:16.159 "compare": 
false, 00:12:16.159 "compare_and_write": false, 00:12:16.159 "abort": false, 00:12:16.159 "seek_hole": false, 00:12:16.159 "seek_data": false, 00:12:16.159 "copy": false, 00:12:16.159 "nvme_iov_md": false 00:12:16.159 }, 00:12:16.159 "memory_domains": [ 00:12:16.159 { 00:12:16.159 "dma_device_id": "system", 00:12:16.159 "dma_device_type": 1 00:12:16.159 }, 00:12:16.159 { 00:12:16.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.159 "dma_device_type": 2 00:12:16.159 }, 00:12:16.159 { 00:12:16.159 "dma_device_id": "system", 00:12:16.159 "dma_device_type": 1 00:12:16.159 }, 00:12:16.159 { 00:12:16.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.159 "dma_device_type": 2 00:12:16.159 }, 00:12:16.159 { 00:12:16.160 "dma_device_id": "system", 00:12:16.160 "dma_device_type": 1 00:12:16.160 }, 00:12:16.160 { 00:12:16.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.160 "dma_device_type": 2 00:12:16.160 } 00:12:16.160 ], 00:12:16.160 "driver_specific": { 00:12:16.160 "raid": { 00:12:16.160 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:16.160 "strip_size_kb": 64, 00:12:16.160 "state": "online", 00:12:16.160 "raid_level": "raid0", 00:12:16.160 "superblock": true, 00:12:16.160 "num_base_bdevs": 3, 00:12:16.160 "num_base_bdevs_discovered": 3, 00:12:16.160 "num_base_bdevs_operational": 3, 00:12:16.160 "base_bdevs_list": [ 00:12:16.160 { 00:12:16.160 "name": "pt1", 00:12:16.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.160 "is_configured": true, 00:12:16.160 "data_offset": 2048, 00:12:16.160 "data_size": 63488 00:12:16.160 }, 00:12:16.160 { 00:12:16.160 "name": "pt2", 00:12:16.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.160 "is_configured": true, 00:12:16.160 "data_offset": 2048, 00:12:16.160 "data_size": 63488 00:12:16.160 }, 00:12:16.160 { 00:12:16.160 "name": "pt3", 00:12:16.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.160 "is_configured": true, 00:12:16.160 "data_offset": 2048, 00:12:16.160 "data_size": 
63488 00:12:16.160 } 00:12:16.160 ] 00:12:16.160 } 00:12:16.160 } 00:12:16.160 }' 00:12:16.160 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.160 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:16.160 pt2 00:12:16.160 pt3' 00:12:16.160 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 
19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:16.418 [2024-12-05 19:32:09.790529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=36785447-8fe8-4222-989a-003433d57871 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 36785447-8fe8-4222-989a-003433d57871 ']' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 [2024-12-05 19:32:09.838160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.418 [2024-12-05 19:32:09.838223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.418 [2024-12-05 19:32:09.838341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.418 [2024-12-05 19:32:09.838463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.418 [2024-12-05 19:32:09.838485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.418 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.677 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.678 [2024-12-05 19:32:09.982264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:16.678 [2024-12-05 19:32:09.985102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:16.678 [2024-12-05 19:32:09.985198] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:16.678 [2024-12-05 19:32:09.985293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:16.678 [2024-12-05 19:32:09.985417] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:16.678 [2024-12-05 19:32:09.985466] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:16.678 [2024-12-05 19:32:09.985502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.678 [2024-12-05 19:32:09.985521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:16.678 request: 00:12:16.678 { 00:12:16.678 "name": "raid_bdev1", 00:12:16.678 "raid_level": "raid0", 00:12:16.678 "base_bdevs": [ 00:12:16.678 "malloc1", 00:12:16.678 "malloc2", 00:12:16.678 "malloc3" 00:12:16.678 ], 00:12:16.678 "strip_size_kb": 64, 00:12:16.678 "superblock": false, 00:12:16.678 "method": "bdev_raid_create", 00:12:16.678 "req_id": 1 00:12:16.678 } 00:12:16.678 Got JSON-RPC error response 00:12:16.678 response: 00:12:16.678 { 00:12:16.678 "code": -17, 00:12:16.678 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:16.678 } 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.678 19:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.678 [2024-12-05 19:32:10.046304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:16.678 [2024-12-05 19:32:10.046421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.678 [2024-12-05 19:32:10.046463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:16.678 [2024-12-05 19:32:10.046481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.678 [2024-12-05 19:32:10.049907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.678 [2024-12-05 19:32:10.049953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:16.678 [2024-12-05 19:32:10.050081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:16.678 [2024-12-05 19:32:10.050164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:12:16.678 pt1 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.678 "name": "raid_bdev1", 00:12:16.678 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:16.678 
"strip_size_kb": 64, 00:12:16.678 "state": "configuring", 00:12:16.678 "raid_level": "raid0", 00:12:16.678 "superblock": true, 00:12:16.678 "num_base_bdevs": 3, 00:12:16.678 "num_base_bdevs_discovered": 1, 00:12:16.678 "num_base_bdevs_operational": 3, 00:12:16.678 "base_bdevs_list": [ 00:12:16.678 { 00:12:16.678 "name": "pt1", 00:12:16.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.678 "is_configured": true, 00:12:16.678 "data_offset": 2048, 00:12:16.678 "data_size": 63488 00:12:16.678 }, 00:12:16.678 { 00:12:16.678 "name": null, 00:12:16.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.678 "is_configured": false, 00:12:16.678 "data_offset": 2048, 00:12:16.678 "data_size": 63488 00:12:16.678 }, 00:12:16.678 { 00:12:16.678 "name": null, 00:12:16.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.678 "is_configured": false, 00:12:16.678 "data_offset": 2048, 00:12:16.678 "data_size": 63488 00:12:16.678 } 00:12:16.678 ] 00:12:16.678 }' 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.678 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.247 [2024-12-05 19:32:10.554676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.247 [2024-12-05 19:32:10.554839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.247 [2024-12-05 19:32:10.554892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:12:17.247 [2024-12-05 19:32:10.554911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.247 [2024-12-05 19:32:10.555601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.247 [2024-12-05 19:32:10.555672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.247 [2024-12-05 19:32:10.555836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.247 [2024-12-05 19:32:10.555889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.247 pt2 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.247 [2024-12-05 19:32:10.562589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.247 19:32:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.247 "name": "raid_bdev1", 00:12:17.247 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:17.247 "strip_size_kb": 64, 00:12:17.247 "state": "configuring", 00:12:17.247 "raid_level": "raid0", 00:12:17.247 "superblock": true, 00:12:17.247 "num_base_bdevs": 3, 00:12:17.247 "num_base_bdevs_discovered": 1, 00:12:17.247 "num_base_bdevs_operational": 3, 00:12:17.247 "base_bdevs_list": [ 00:12:17.247 { 00:12:17.247 "name": "pt1", 00:12:17.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.247 "is_configured": true, 00:12:17.247 "data_offset": 2048, 00:12:17.247 "data_size": 63488 00:12:17.247 }, 00:12:17.247 { 00:12:17.247 "name": null, 00:12:17.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.247 "is_configured": false, 00:12:17.247 "data_offset": 0, 00:12:17.247 "data_size": 63488 00:12:17.247 }, 00:12:17.247 { 00:12:17.247 "name": null, 00:12:17.247 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.247 
"is_configured": false, 00:12:17.247 "data_offset": 2048, 00:12:17.247 "data_size": 63488 00:12:17.247 } 00:12:17.247 ] 00:12:17.247 }' 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.247 19:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.816 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:17.816 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.816 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.816 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.816 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.816 [2024-12-05 19:32:11.074827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.816 [2024-12-05 19:32:11.074978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.816 [2024-12-05 19:32:11.075014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:17.817 [2024-12-05 19:32:11.075036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.817 [2024-12-05 19:32:11.075825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.817 [2024-12-05 19:32:11.075874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.817 [2024-12-05 19:32:11.076035] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.817 [2024-12-05 19:32:11.076081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.817 pt2 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.817 [2024-12-05 19:32:11.082678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.817 [2024-12-05 19:32:11.082756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.817 [2024-12-05 19:32:11.082794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:17.817 [2024-12-05 19:32:11.082814] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.817 [2024-12-05 19:32:11.083330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.817 [2024-12-05 19:32:11.083417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.817 [2024-12-05 19:32:11.083503] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:17.817 [2024-12-05 19:32:11.083543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.817 [2024-12-05 19:32:11.083742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:17.817 [2024-12-05 19:32:11.083777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:17.817 [2024-12-05 19:32:11.084186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.817 [2024-12-05 19:32:11.084480] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.817 [2024-12-05 19:32:11.084507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:17.817 [2024-12-05 19:32:11.084694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.817 pt3 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.817 "name": "raid_bdev1", 00:12:17.817 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:17.817 "strip_size_kb": 64, 00:12:17.817 "state": "online", 00:12:17.817 "raid_level": "raid0", 00:12:17.817 "superblock": true, 00:12:17.817 "num_base_bdevs": 3, 00:12:17.817 "num_base_bdevs_discovered": 3, 00:12:17.817 "num_base_bdevs_operational": 3, 00:12:17.817 "base_bdevs_list": [ 00:12:17.817 { 00:12:17.817 "name": "pt1", 00:12:17.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.817 "is_configured": true, 00:12:17.817 "data_offset": 2048, 00:12:17.817 "data_size": 63488 00:12:17.817 }, 00:12:17.817 { 00:12:17.817 "name": "pt2", 00:12:17.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.817 "is_configured": true, 00:12:17.817 "data_offset": 2048, 00:12:17.817 "data_size": 63488 00:12:17.817 }, 00:12:17.817 { 00:12:17.817 "name": "pt3", 00:12:17.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.817 "is_configured": true, 00:12:17.817 "data_offset": 2048, 00:12:17.817 "data_size": 63488 00:12:17.817 } 00:12:17.817 ] 00:12:17.817 }' 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.817 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:18.386 19:32:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.386 [2024-12-05 19:32:11.647405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.386 "name": "raid_bdev1", 00:12:18.386 "aliases": [ 00:12:18.386 "36785447-8fe8-4222-989a-003433d57871" 00:12:18.386 ], 00:12:18.386 "product_name": "Raid Volume", 00:12:18.386 "block_size": 512, 00:12:18.386 "num_blocks": 190464, 00:12:18.386 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:18.386 "assigned_rate_limits": { 00:12:18.386 "rw_ios_per_sec": 0, 00:12:18.386 "rw_mbytes_per_sec": 0, 00:12:18.386 "r_mbytes_per_sec": 0, 00:12:18.386 "w_mbytes_per_sec": 0 00:12:18.386 }, 00:12:18.386 "claimed": false, 00:12:18.386 "zoned": false, 00:12:18.386 "supported_io_types": { 00:12:18.386 "read": true, 00:12:18.386 "write": true, 00:12:18.386 "unmap": true, 00:12:18.386 "flush": true, 00:12:18.386 "reset": true, 00:12:18.386 "nvme_admin": false, 00:12:18.386 "nvme_io": false, 00:12:18.386 "nvme_io_md": false, 00:12:18.386 
"write_zeroes": true, 00:12:18.386 "zcopy": false, 00:12:18.386 "get_zone_info": false, 00:12:18.386 "zone_management": false, 00:12:18.386 "zone_append": false, 00:12:18.386 "compare": false, 00:12:18.386 "compare_and_write": false, 00:12:18.386 "abort": false, 00:12:18.386 "seek_hole": false, 00:12:18.386 "seek_data": false, 00:12:18.386 "copy": false, 00:12:18.386 "nvme_iov_md": false 00:12:18.386 }, 00:12:18.386 "memory_domains": [ 00:12:18.386 { 00:12:18.386 "dma_device_id": "system", 00:12:18.386 "dma_device_type": 1 00:12:18.386 }, 00:12:18.386 { 00:12:18.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.386 "dma_device_type": 2 00:12:18.386 }, 00:12:18.386 { 00:12:18.386 "dma_device_id": "system", 00:12:18.386 "dma_device_type": 1 00:12:18.386 }, 00:12:18.386 { 00:12:18.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.386 "dma_device_type": 2 00:12:18.386 }, 00:12:18.386 { 00:12:18.386 "dma_device_id": "system", 00:12:18.386 "dma_device_type": 1 00:12:18.386 }, 00:12:18.386 { 00:12:18.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.386 "dma_device_type": 2 00:12:18.386 } 00:12:18.386 ], 00:12:18.386 "driver_specific": { 00:12:18.386 "raid": { 00:12:18.386 "uuid": "36785447-8fe8-4222-989a-003433d57871", 00:12:18.386 "strip_size_kb": 64, 00:12:18.386 "state": "online", 00:12:18.386 "raid_level": "raid0", 00:12:18.386 "superblock": true, 00:12:18.386 "num_base_bdevs": 3, 00:12:18.386 "num_base_bdevs_discovered": 3, 00:12:18.386 "num_base_bdevs_operational": 3, 00:12:18.386 "base_bdevs_list": [ 00:12:18.386 { 00:12:18.386 "name": "pt1", 00:12:18.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.386 "is_configured": true, 00:12:18.386 "data_offset": 2048, 00:12:18.386 "data_size": 63488 00:12:18.386 }, 00:12:18.386 { 00:12:18.386 "name": "pt2", 00:12:18.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.386 "is_configured": true, 00:12:18.386 "data_offset": 2048, 00:12:18.386 "data_size": 63488 00:12:18.386 }, 00:12:18.386 
{ 00:12:18.386 "name": "pt3", 00:12:18.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.386 "is_configured": true, 00:12:18.386 "data_offset": 2048, 00:12:18.386 "data_size": 63488 00:12:18.386 } 00:12:18.386 ] 00:12:18.386 } 00:12:18.386 } 00:12:18.386 }' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:18.386 pt2 00:12:18.386 pt3' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.386 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:18.708 19:32:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:18.708 
[2024-12-05 19:32:11.967563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.708 19:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 36785447-8fe8-4222-989a-003433d57871 '!=' 36785447-8fe8-4222-989a-003433d57871 ']' 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65066 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65066 ']' 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65066 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65066 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.708 killing process with pid 65066 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65066' 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65066 00:12:18.708 [2024-12-05 19:32:12.044131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.708 19:32:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65066 00:12:18.708 [2024-12-05 19:32:12.044325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.708 [2024-12-05 19:32:12.044437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.708 [2024-12-05 19:32:12.044489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:18.982 [2024-12-05 19:32:12.332896] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.360 19:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:20.360 00:12:20.360 real 0m5.824s 00:12:20.360 user 0m8.647s 00:12:20.360 sys 0m0.894s 00:12:20.360 19:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.360 19:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.360 ************************************ 00:12:20.360 END TEST raid_superblock_test 00:12:20.360 ************************************ 00:12:20.360 19:32:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:20.360 19:32:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.360 19:32:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.360 19:32:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.360 ************************************ 00:12:20.360 START TEST raid_read_error_test 00:12:20.360 ************************************ 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:20.360 19:32:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:20.360 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cyB6cjZ4Ze 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65330 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65330 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65330 ']' 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.361 19:32:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.361 [2024-12-05 19:32:13.644974] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:20.361 [2024-12-05 19:32:13.645154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65330 ] 00:12:20.619 [2024-12-05 19:32:13.830124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.619 [2024-12-05 19:32:13.982575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.878 [2024-12-05 19:32:14.208992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.878 [2024-12-05 19:32:14.209110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 BaseBdev1_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 true 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 [2024-12-05 19:32:14.799514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:21.446 [2024-12-05 19:32:14.799626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.446 [2024-12-05 19:32:14.799675] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:21.446 [2024-12-05 19:32:14.799714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.446 [2024-12-05 19:32:14.802848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.446 [2024-12-05 19:32:14.802911] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.446 BaseBdev1 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 BaseBdev2_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 true 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 [2024-12-05 19:32:14.859676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.446 [2024-12-05 19:32:14.859793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.446 [2024-12-05 19:32:14.859826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:21.446 [2024-12-05 19:32:14.859852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.446 [2024-12-05 19:32:14.862950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.446 [2024-12-05 19:32:14.863008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.446 BaseBdev2 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.705 BaseBdev3_malloc 00:12:21.705 19:32:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.705 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:21.705 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.705 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.706 true 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.706 [2024-12-05 19:32:14.927619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:21.706 [2024-12-05 19:32:14.927766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.706 [2024-12-05 19:32:14.927803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:21.706 [2024-12-05 19:32:14.927826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.706 [2024-12-05 19:32:14.930982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.706 [2024-12-05 19:32:14.931090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:21.706 BaseBdev3 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.706 [2024-12-05 19:32:14.935832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.706 [2024-12-05 19:32:14.938591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.706 [2024-12-05 19:32:14.938707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.706 [2024-12-05 19:32:14.939060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.706 [2024-12-05 19:32:14.939085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:21.706 [2024-12-05 19:32:14.939425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:21.706 [2024-12-05 19:32:14.939773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.706 [2024-12-05 19:32:14.939810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:21.706 [2024-12-05 19:32:14.940104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.706 19:32:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.706 "name": "raid_bdev1", 00:12:21.706 "uuid": "0533ca7c-55fb-4ff1-9a08-6e214504aba7", 00:12:21.706 "strip_size_kb": 64, 00:12:21.706 "state": "online", 00:12:21.706 "raid_level": "raid0", 00:12:21.706 "superblock": true, 00:12:21.706 "num_base_bdevs": 3, 00:12:21.706 "num_base_bdevs_discovered": 3, 00:12:21.706 "num_base_bdevs_operational": 3, 00:12:21.706 "base_bdevs_list": [ 00:12:21.706 { 00:12:21.706 "name": "BaseBdev1", 00:12:21.706 "uuid": "18ad18b2-37d6-5c2c-9f35-e03d3cc3c2fc", 00:12:21.706 "is_configured": true, 00:12:21.706 "data_offset": 2048, 00:12:21.706 "data_size": 63488 00:12:21.706 }, 00:12:21.706 { 00:12:21.706 "name": "BaseBdev2", 00:12:21.706 "uuid": "fab783b9-2b62-52e2-8cc7-51e3f78188a3", 00:12:21.706 "is_configured": true, 00:12:21.706 "data_offset": 2048, 00:12:21.706 "data_size": 63488 
00:12:21.706 }, 00:12:21.706 { 00:12:21.706 "name": "BaseBdev3", 00:12:21.706 "uuid": "8d191151-2f12-5e60-9682-5b2e8f30776e", 00:12:21.706 "is_configured": true, 00:12:21.706 "data_offset": 2048, 00:12:21.706 "data_size": 63488 00:12:21.706 } 00:12:21.706 ] 00:12:21.706 }' 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.706 19:32:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.272 19:32:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:22.272 19:32:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:22.272 [2024-12-05 19:32:15.505728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.208 "name": "raid_bdev1", 00:12:23.208 "uuid": "0533ca7c-55fb-4ff1-9a08-6e214504aba7", 00:12:23.208 "strip_size_kb": 64, 00:12:23.208 "state": "online", 00:12:23.208 "raid_level": "raid0", 00:12:23.208 "superblock": true, 00:12:23.208 "num_base_bdevs": 3, 00:12:23.208 "num_base_bdevs_discovered": 3, 00:12:23.208 "num_base_bdevs_operational": 3, 00:12:23.208 "base_bdevs_list": [ 00:12:23.208 { 00:12:23.208 "name": "BaseBdev1", 00:12:23.208 "uuid": "18ad18b2-37d6-5c2c-9f35-e03d3cc3c2fc", 00:12:23.208 "is_configured": true, 00:12:23.208 "data_offset": 2048, 00:12:23.208 "data_size": 63488 
00:12:23.208 }, 00:12:23.208 { 00:12:23.208 "name": "BaseBdev2", 00:12:23.208 "uuid": "fab783b9-2b62-52e2-8cc7-51e3f78188a3", 00:12:23.208 "is_configured": true, 00:12:23.208 "data_offset": 2048, 00:12:23.208 "data_size": 63488 00:12:23.208 }, 00:12:23.208 { 00:12:23.208 "name": "BaseBdev3", 00:12:23.208 "uuid": "8d191151-2f12-5e60-9682-5b2e8f30776e", 00:12:23.208 "is_configured": true, 00:12:23.208 "data_offset": 2048, 00:12:23.208 "data_size": 63488 00:12:23.208 } 00:12:23.208 ] 00:12:23.208 }' 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.208 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.775 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.775 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.775 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.775 [2024-12-05 19:32:16.945406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.775 [2024-12-05 19:32:16.945845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.775 [2024-12-05 19:32:16.949785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.775 [2024-12-05 19:32:16.950091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.776 [2024-12-05 19:32:16.950311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.776 [2024-12-05 19:32:16.950501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:23.776 { 00:12:23.776 "results": [ 00:12:23.776 { 00:12:23.776 "job": "raid_bdev1", 00:12:23.776 "core_mask": "0x1", 00:12:23.776 "workload": "randrw", 00:12:23.776 "percentage": 50, 
00:12:23.776 "status": "finished", 00:12:23.776 "queue_depth": 1, 00:12:23.776 "io_size": 131072, 00:12:23.776 "runtime": 1.437206, 00:12:23.776 "iops": 9287.464705825052, 00:12:23.776 "mibps": 1160.9330882281315, 00:12:23.776 "io_failed": 1, 00:12:23.776 "io_timeout": 0, 00:12:23.776 "avg_latency_us": 150.62452018877818, 00:12:23.776 "min_latency_us": 34.67636363636364, 00:12:23.776 "max_latency_us": 1966.08 00:12:23.776 } 00:12:23.776 ], 00:12:23.776 "core_count": 1 00:12:23.776 } 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65330 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65330 ']' 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65330 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65330 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65330' 00:12:23.776 killing process with pid 65330 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65330 00:12:23.776 [2024-12-05 19:32:16.991303] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.776 19:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65330 00:12:23.776 [2024-12-05 19:32:17.210161] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cyB6cjZ4Ze 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:25.151 00:12:25.151 real 0m4.942s 00:12:25.151 user 0m6.053s 00:12:25.151 sys 0m0.630s 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.151 19:32:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.151 ************************************ 00:12:25.151 END TEST raid_read_error_test 00:12:25.151 ************************************ 00:12:25.151 19:32:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:25.152 19:32:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:25.152 19:32:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.152 19:32:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.152 ************************************ 00:12:25.152 START TEST raid_write_error_test 00:12:25.152 ************************************ 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:12:25.152 19:32:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:25.152 19:32:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NlnGWgt95f 00:12:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65476 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65476 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65476 ']' 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.152 19:32:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.410 [2024-12-05 19:32:18.647706] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:12:25.410 [2024-12-05 19:32:18.648130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65476 ] 00:12:25.410 [2024-12-05 19:32:18.832566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.668 [2024-12-05 19:32:18.988695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.926 [2024-12-05 19:32:19.226931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.926 [2024-12-05 19:32:19.227283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.185 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.185 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:26.185 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.185 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.185 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.185 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.445 BaseBdev1_malloc 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.445 true 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.445 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 [2024-12-05 19:32:19.658646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:26.446 [2024-12-05 19:32:19.658788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.446 [2024-12-05 19:32:19.658825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:26.446 [2024-12-05 19:32:19.658872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.446 [2024-12-05 19:32:19.662003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.446 [2024-12-05 19:32:19.662062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:26.446 BaseBdev1 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.446 BaseBdev2_malloc 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 true 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 [2024-12-05 19:32:19.722186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:26.446 [2024-12-05 19:32:19.722294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.446 [2024-12-05 19:32:19.722325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:26.446 [2024-12-05 19:32:19.722346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.446 [2024-12-05 19:32:19.725344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.446 [2024-12-05 19:32:19.725419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:26.446 BaseBdev2 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.446 19:32:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 BaseBdev3_malloc 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 true 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 [2024-12-05 19:32:19.796792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:26.446 [2024-12-05 19:32:19.796934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.446 [2024-12-05 19:32:19.796975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:26.446 [2024-12-05 19:32:19.796997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.446 [2024-12-05 19:32:19.800166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.446 [2024-12-05 19:32:19.800403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:26.446 BaseBdev3 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 [2024-12-05 19:32:19.805158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.446 [2024-12-05 19:32:19.807852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.446 [2024-12-05 19:32:19.807972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.446 [2024-12-05 19:32:19.808290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:26.446 [2024-12-05 19:32:19.808314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:26.446 [2024-12-05 19:32:19.808634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:26.446 [2024-12-05 19:32:19.808923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:26.446 [2024-12-05 19:32:19.808956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:26.446 [2024-12-05 19:32:19.809222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.446 "name": "raid_bdev1", 00:12:26.446 "uuid": "1bbb74e3-66eb-4f2f-a98b-618c056acb44", 00:12:26.446 "strip_size_kb": 64, 00:12:26.446 "state": "online", 00:12:26.446 "raid_level": "raid0", 00:12:26.446 "superblock": true, 00:12:26.446 "num_base_bdevs": 3, 00:12:26.446 "num_base_bdevs_discovered": 3, 00:12:26.446 "num_base_bdevs_operational": 3, 00:12:26.446 "base_bdevs_list": [ 00:12:26.446 { 00:12:26.446 "name": "BaseBdev1", 
00:12:26.446 "uuid": "ae4819e3-8e00-505e-891c-2690ad383979", 00:12:26.446 "is_configured": true, 00:12:26.446 "data_offset": 2048, 00:12:26.446 "data_size": 63488 00:12:26.446 }, 00:12:26.446 { 00:12:26.446 "name": "BaseBdev2", 00:12:26.446 "uuid": "ca65cf1c-c569-5038-9e44-798e6a275fd0", 00:12:26.446 "is_configured": true, 00:12:26.446 "data_offset": 2048, 00:12:26.446 "data_size": 63488 00:12:26.446 }, 00:12:26.446 { 00:12:26.446 "name": "BaseBdev3", 00:12:26.446 "uuid": "72a3908e-3613-594f-b2f3-9bf5881ad99c", 00:12:26.446 "is_configured": true, 00:12:26.446 "data_offset": 2048, 00:12:26.446 "data_size": 63488 00:12:26.446 } 00:12:26.446 ] 00:12:26.446 }' 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.446 19:32:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.014 19:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:27.014 19:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:27.274 [2024-12-05 19:32:20.458991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.212 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.212 "name": "raid_bdev1", 00:12:28.212 "uuid": "1bbb74e3-66eb-4f2f-a98b-618c056acb44", 00:12:28.212 "strip_size_kb": 64, 00:12:28.212 "state": "online", 00:12:28.212 
"raid_level": "raid0", 00:12:28.212 "superblock": true, 00:12:28.212 "num_base_bdevs": 3, 00:12:28.212 "num_base_bdevs_discovered": 3, 00:12:28.212 "num_base_bdevs_operational": 3, 00:12:28.212 "base_bdevs_list": [ 00:12:28.212 { 00:12:28.212 "name": "BaseBdev1", 00:12:28.212 "uuid": "ae4819e3-8e00-505e-891c-2690ad383979", 00:12:28.212 "is_configured": true, 00:12:28.212 "data_offset": 2048, 00:12:28.212 "data_size": 63488 00:12:28.212 }, 00:12:28.212 { 00:12:28.212 "name": "BaseBdev2", 00:12:28.212 "uuid": "ca65cf1c-c569-5038-9e44-798e6a275fd0", 00:12:28.212 "is_configured": true, 00:12:28.212 "data_offset": 2048, 00:12:28.212 "data_size": 63488 00:12:28.212 }, 00:12:28.212 { 00:12:28.212 "name": "BaseBdev3", 00:12:28.212 "uuid": "72a3908e-3613-594f-b2f3-9bf5881ad99c", 00:12:28.212 "is_configured": true, 00:12:28.212 "data_offset": 2048, 00:12:28.212 "data_size": 63488 00:12:28.212 } 00:12:28.212 ] 00:12:28.212 }' 00:12:28.213 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.213 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.472 [2024-12-05 19:32:21.845006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.472 [2024-12-05 19:32:21.845042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.472 [2024-12-05 19:32:21.848565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.472 { 00:12:28.472 "results": [ 00:12:28.472 { 00:12:28.472 "job": "raid_bdev1", 00:12:28.472 "core_mask": "0x1", 00:12:28.472 "workload": "randrw", 00:12:28.472 "percentage": 
50, 00:12:28.472 "status": "finished", 00:12:28.472 "queue_depth": 1, 00:12:28.472 "io_size": 131072, 00:12:28.472 "runtime": 1.383183, 00:12:28.472 "iops": 9762.988700699763, 00:12:28.472 "mibps": 1220.3735875874704, 00:12:28.472 "io_failed": 1, 00:12:28.472 "io_timeout": 0, 00:12:28.472 "avg_latency_us": 142.74045949311702, 00:12:28.472 "min_latency_us": 30.254545454545454, 00:12:28.472 "max_latency_us": 1765.0036363636364 00:12:28.472 } 00:12:28.472 ], 00:12:28.472 "core_count": 1 00:12:28.472 } 00:12:28.472 [2024-12-05 19:32:21.848829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.472 [2024-12-05 19:32:21.848905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.472 [2024-12-05 19:32:21.848922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65476 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65476 ']' 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65476 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65476 00:12:28.472 killing process with pid 65476 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.472 19:32:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65476' 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65476 00:12:28.472 [2024-12-05 19:32:21.886767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.472 19:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65476 00:12:28.731 [2024-12-05 19:32:22.141872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NlnGWgt95f 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:30.108 00:12:30.108 real 0m4.710s 00:12:30.108 user 0m5.719s 00:12:30.108 sys 0m0.655s 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.108 ************************************ 00:12:30.108 END TEST raid_write_error_test 00:12:30.108 ************************************ 00:12:30.108 19:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.109 19:32:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:30.109 19:32:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:30.109 19:32:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.109 19:32:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.109 19:32:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.109 ************************************ 00:12:30.109 START TEST raid_state_function_test 00:12:30.109 ************************************ 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:30.109 19:32:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:30.109 Process raid pid: 65619 00:12:30.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65619 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65619' 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65619 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65619 ']' 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.109 19:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.109 [2024-12-05 19:32:23.402470] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:30.109 [2024-12-05 19:32:23.402991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.368 [2024-12-05 19:32:23.608538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.368 [2024-12-05 19:32:23.739839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.627 [2024-12-05 19:32:23.943021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.627 [2024-12-05 19:32:23.943085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.193 [2024-12-05 19:32:24.418110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.193 [2024-12-05 19:32:24.418175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.193 [2024-12-05 19:32:24.418190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.193 [2024-12-05 19:32:24.418204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.193 [2024-12-05 19:32:24.418213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:31.193 [2024-12-05 19:32:24.418225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.193 19:32:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.193 "name": "Existed_Raid", 00:12:31.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.193 "strip_size_kb": 64, 00:12:31.193 "state": "configuring", 00:12:31.193 "raid_level": "concat", 00:12:31.193 "superblock": false, 00:12:31.193 "num_base_bdevs": 3, 00:12:31.193 "num_base_bdevs_discovered": 0, 00:12:31.193 "num_base_bdevs_operational": 3, 00:12:31.193 "base_bdevs_list": [ 00:12:31.193 { 00:12:31.193 "name": "BaseBdev1", 00:12:31.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.193 "is_configured": false, 00:12:31.193 "data_offset": 0, 00:12:31.193 "data_size": 0 00:12:31.193 }, 00:12:31.193 { 00:12:31.193 "name": "BaseBdev2", 00:12:31.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.193 "is_configured": false, 00:12:31.193 "data_offset": 0, 00:12:31.193 "data_size": 0 00:12:31.193 }, 00:12:31.193 { 00:12:31.193 "name": "BaseBdev3", 00:12:31.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.193 "is_configured": false, 00:12:31.193 "data_offset": 0, 00:12:31.193 "data_size": 0 00:12:31.193 } 00:12:31.193 ] 00:12:31.193 }' 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.193 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 [2024-12-05 19:32:24.938179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.764 [2024-12-05 19:32:24.938385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 [2024-12-05 19:32:24.950171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.764 [2024-12-05 19:32:24.950352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.764 [2024-12-05 19:32:24.950477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.764 [2024-12-05 19:32:24.950539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.764 [2024-12-05 19:32:24.950678] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.764 [2024-12-05 19:32:24.950830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 [2024-12-05 19:32:24.999054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.764 BaseBdev1 00:12:31.764 19:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 [ 00:12:31.764 { 00:12:31.764 "name": "BaseBdev1", 00:12:31.764 "aliases": [ 00:12:31.764 "01eaee2c-ceae-40ba-baa7-c526de72ae66" 00:12:31.764 ], 00:12:31.764 "product_name": "Malloc disk", 00:12:31.764 "block_size": 512, 00:12:31.764 "num_blocks": 65536, 00:12:31.764 "uuid": "01eaee2c-ceae-40ba-baa7-c526de72ae66", 00:12:31.764 "assigned_rate_limits": { 00:12:31.764 "rw_ios_per_sec": 0, 00:12:31.764 "rw_mbytes_per_sec": 0, 00:12:31.764 "r_mbytes_per_sec": 0, 00:12:31.764 "w_mbytes_per_sec": 0 00:12:31.764 }, 
00:12:31.764 "claimed": true, 00:12:31.764 "claim_type": "exclusive_write", 00:12:31.764 "zoned": false, 00:12:31.764 "supported_io_types": { 00:12:31.764 "read": true, 00:12:31.764 "write": true, 00:12:31.764 "unmap": true, 00:12:31.764 "flush": true, 00:12:31.764 "reset": true, 00:12:31.764 "nvme_admin": false, 00:12:31.764 "nvme_io": false, 00:12:31.764 "nvme_io_md": false, 00:12:31.764 "write_zeroes": true, 00:12:31.764 "zcopy": true, 00:12:31.764 "get_zone_info": false, 00:12:31.764 "zone_management": false, 00:12:31.764 "zone_append": false, 00:12:31.764 "compare": false, 00:12:31.764 "compare_and_write": false, 00:12:31.764 "abort": true, 00:12:31.764 "seek_hole": false, 00:12:31.764 "seek_data": false, 00:12:31.764 "copy": true, 00:12:31.764 "nvme_iov_md": false 00:12:31.764 }, 00:12:31.764 "memory_domains": [ 00:12:31.764 { 00:12:31.764 "dma_device_id": "system", 00:12:31.764 "dma_device_type": 1 00:12:31.764 }, 00:12:31.764 { 00:12:31.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.764 "dma_device_type": 2 00:12:31.764 } 00:12:31.764 ], 00:12:31.764 "driver_specific": {} 00:12:31.764 } 00:12:31.764 ] 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.764 19:32:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.764 "name": "Existed_Raid", 00:12:31.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.764 "strip_size_kb": 64, 00:12:31.764 "state": "configuring", 00:12:31.764 "raid_level": "concat", 00:12:31.764 "superblock": false, 00:12:31.764 "num_base_bdevs": 3, 00:12:31.764 "num_base_bdevs_discovered": 1, 00:12:31.764 "num_base_bdevs_operational": 3, 00:12:31.764 "base_bdevs_list": [ 00:12:31.764 { 00:12:31.764 "name": "BaseBdev1", 00:12:31.764 "uuid": "01eaee2c-ceae-40ba-baa7-c526de72ae66", 00:12:31.764 "is_configured": true, 00:12:31.764 "data_offset": 0, 00:12:31.764 "data_size": 65536 00:12:31.764 }, 00:12:31.764 { 00:12:31.764 "name": "BaseBdev2", 00:12:31.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.764 "is_configured": false, 00:12:31.764 
"data_offset": 0, 00:12:31.764 "data_size": 0 00:12:31.764 }, 00:12:31.764 { 00:12:31.764 "name": "BaseBdev3", 00:12:31.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.764 "is_configured": false, 00:12:31.764 "data_offset": 0, 00:12:31.764 "data_size": 0 00:12:31.764 } 00:12:31.764 ] 00:12:31.764 }' 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.764 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.329 [2024-12-05 19:32:25.555308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.329 [2024-12-05 19:32:25.555388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.329 [2024-12-05 19:32:25.563355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.329 [2024-12-05 19:32:25.566017] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.329 [2024-12-05 19:32:25.566218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:32.329 [2024-12-05 19:32:25.566346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.329 [2024-12-05 19:32:25.566479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.329 "name": "Existed_Raid", 00:12:32.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.329 "strip_size_kb": 64, 00:12:32.329 "state": "configuring", 00:12:32.329 "raid_level": "concat", 00:12:32.329 "superblock": false, 00:12:32.329 "num_base_bdevs": 3, 00:12:32.329 "num_base_bdevs_discovered": 1, 00:12:32.329 "num_base_bdevs_operational": 3, 00:12:32.329 "base_bdevs_list": [ 00:12:32.329 { 00:12:32.329 "name": "BaseBdev1", 00:12:32.329 "uuid": "01eaee2c-ceae-40ba-baa7-c526de72ae66", 00:12:32.329 "is_configured": true, 00:12:32.329 "data_offset": 0, 00:12:32.329 "data_size": 65536 00:12:32.329 }, 00:12:32.329 { 00:12:32.329 "name": "BaseBdev2", 00:12:32.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.329 "is_configured": false, 00:12:32.329 "data_offset": 0, 00:12:32.329 "data_size": 0 00:12:32.329 }, 00:12:32.329 { 00:12:32.329 "name": "BaseBdev3", 00:12:32.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.329 "is_configured": false, 00:12:32.329 "data_offset": 0, 00:12:32.329 "data_size": 0 00:12:32.329 } 00:12:32.329 ] 00:12:32.329 }' 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.329 19:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 [2024-12-05 19:32:26.138190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.894 BaseBdev2 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 [ 00:12:32.894 { 00:12:32.894 "name": "BaseBdev2", 00:12:32.894 "aliases": [ 00:12:32.894 "33c13cad-cf83-492f-8bca-035fa40d345e" 00:12:32.894 ], 00:12:32.894 
"product_name": "Malloc disk", 00:12:32.894 "block_size": 512, 00:12:32.894 "num_blocks": 65536, 00:12:32.894 "uuid": "33c13cad-cf83-492f-8bca-035fa40d345e", 00:12:32.894 "assigned_rate_limits": { 00:12:32.894 "rw_ios_per_sec": 0, 00:12:32.894 "rw_mbytes_per_sec": 0, 00:12:32.894 "r_mbytes_per_sec": 0, 00:12:32.894 "w_mbytes_per_sec": 0 00:12:32.894 }, 00:12:32.894 "claimed": true, 00:12:32.894 "claim_type": "exclusive_write", 00:12:32.894 "zoned": false, 00:12:32.894 "supported_io_types": { 00:12:32.894 "read": true, 00:12:32.894 "write": true, 00:12:32.894 "unmap": true, 00:12:32.894 "flush": true, 00:12:32.894 "reset": true, 00:12:32.894 "nvme_admin": false, 00:12:32.894 "nvme_io": false, 00:12:32.894 "nvme_io_md": false, 00:12:32.894 "write_zeroes": true, 00:12:32.894 "zcopy": true, 00:12:32.894 "get_zone_info": false, 00:12:32.894 "zone_management": false, 00:12:32.894 "zone_append": false, 00:12:32.894 "compare": false, 00:12:32.894 "compare_and_write": false, 00:12:32.894 "abort": true, 00:12:32.894 "seek_hole": false, 00:12:32.894 "seek_data": false, 00:12:32.894 "copy": true, 00:12:32.894 "nvme_iov_md": false 00:12:32.894 }, 00:12:32.894 "memory_domains": [ 00:12:32.894 { 00:12:32.894 "dma_device_id": "system", 00:12:32.894 "dma_device_type": 1 00:12:32.894 }, 00:12:32.894 { 00:12:32.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.894 "dma_device_type": 2 00:12:32.894 } 00:12:32.894 ], 00:12:32.894 "driver_specific": {} 00:12:32.894 } 00:12:32.894 ] 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.894 "name": "Existed_Raid", 00:12:32.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.894 "strip_size_kb": 64, 00:12:32.894 "state": "configuring", 00:12:32.894 "raid_level": "concat", 00:12:32.894 "superblock": false, 
00:12:32.894 "num_base_bdevs": 3, 00:12:32.894 "num_base_bdevs_discovered": 2, 00:12:32.894 "num_base_bdevs_operational": 3, 00:12:32.894 "base_bdevs_list": [ 00:12:32.894 { 00:12:32.894 "name": "BaseBdev1", 00:12:32.894 "uuid": "01eaee2c-ceae-40ba-baa7-c526de72ae66", 00:12:32.894 "is_configured": true, 00:12:32.894 "data_offset": 0, 00:12:32.894 "data_size": 65536 00:12:32.894 }, 00:12:32.894 { 00:12:32.894 "name": "BaseBdev2", 00:12:32.894 "uuid": "33c13cad-cf83-492f-8bca-035fa40d345e", 00:12:32.894 "is_configured": true, 00:12:32.894 "data_offset": 0, 00:12:32.894 "data_size": 65536 00:12:32.894 }, 00:12:32.894 { 00:12:32.894 "name": "BaseBdev3", 00:12:32.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.894 "is_configured": false, 00:12:32.894 "data_offset": 0, 00:12:32.894 "data_size": 0 00:12:32.894 } 00:12:32.894 ] 00:12:32.894 }' 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.894 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.459 [2024-12-05 19:32:26.764980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.459 [2024-12-05 19:32:26.765052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.459 [2024-12-05 19:32:26.765071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:33.459 [2024-12-05 19:32:26.765456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:33.459 [2024-12-05 19:32:26.765731] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:12:33.459 [2024-12-05 19:32:26.765748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:33.459 BaseBdev3 00:12:33.459 [2024-12-05 19:32:26.766109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.459 [ 00:12:33.459 { 00:12:33.459 "name": "BaseBdev3", 00:12:33.459 "aliases": [ 
00:12:33.459 "f6235a4f-6187-4421-85ce-fbd7f734ea28" 00:12:33.459 ], 00:12:33.459 "product_name": "Malloc disk", 00:12:33.459 "block_size": 512, 00:12:33.459 "num_blocks": 65536, 00:12:33.459 "uuid": "f6235a4f-6187-4421-85ce-fbd7f734ea28", 00:12:33.459 "assigned_rate_limits": { 00:12:33.459 "rw_ios_per_sec": 0, 00:12:33.459 "rw_mbytes_per_sec": 0, 00:12:33.459 "r_mbytes_per_sec": 0, 00:12:33.459 "w_mbytes_per_sec": 0 00:12:33.459 }, 00:12:33.459 "claimed": true, 00:12:33.459 "claim_type": "exclusive_write", 00:12:33.459 "zoned": false, 00:12:33.459 "supported_io_types": { 00:12:33.459 "read": true, 00:12:33.459 "write": true, 00:12:33.459 "unmap": true, 00:12:33.459 "flush": true, 00:12:33.459 "reset": true, 00:12:33.459 "nvme_admin": false, 00:12:33.459 "nvme_io": false, 00:12:33.459 "nvme_io_md": false, 00:12:33.459 "write_zeroes": true, 00:12:33.459 "zcopy": true, 00:12:33.459 "get_zone_info": false, 00:12:33.459 "zone_management": false, 00:12:33.459 "zone_append": false, 00:12:33.459 "compare": false, 00:12:33.459 "compare_and_write": false, 00:12:33.459 "abort": true, 00:12:33.459 "seek_hole": false, 00:12:33.459 "seek_data": false, 00:12:33.459 "copy": true, 00:12:33.459 "nvme_iov_md": false 00:12:33.459 }, 00:12:33.459 "memory_domains": [ 00:12:33.459 { 00:12:33.459 "dma_device_id": "system", 00:12:33.459 "dma_device_type": 1 00:12:33.459 }, 00:12:33.459 { 00:12:33.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.459 "dma_device_type": 2 00:12:33.459 } 00:12:33.459 ], 00:12:33.459 "driver_specific": {} 00:12:33.459 } 00:12:33.459 ] 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.459 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.460 "name": "Existed_Raid", 00:12:33.460 "uuid": "0012034d-5512-4cbf-ace6-ff20442c6b06", 00:12:33.460 "strip_size_kb": 64, 00:12:33.460 "state": "online", 
00:12:33.460 "raid_level": "concat", 00:12:33.460 "superblock": false, 00:12:33.460 "num_base_bdevs": 3, 00:12:33.460 "num_base_bdevs_discovered": 3, 00:12:33.460 "num_base_bdevs_operational": 3, 00:12:33.460 "base_bdevs_list": [ 00:12:33.460 { 00:12:33.460 "name": "BaseBdev1", 00:12:33.460 "uuid": "01eaee2c-ceae-40ba-baa7-c526de72ae66", 00:12:33.460 "is_configured": true, 00:12:33.460 "data_offset": 0, 00:12:33.460 "data_size": 65536 00:12:33.460 }, 00:12:33.460 { 00:12:33.460 "name": "BaseBdev2", 00:12:33.460 "uuid": "33c13cad-cf83-492f-8bca-035fa40d345e", 00:12:33.460 "is_configured": true, 00:12:33.460 "data_offset": 0, 00:12:33.460 "data_size": 65536 00:12:33.460 }, 00:12:33.460 { 00:12:33.460 "name": "BaseBdev3", 00:12:33.460 "uuid": "f6235a4f-6187-4421-85ce-fbd7f734ea28", 00:12:33.460 "is_configured": true, 00:12:33.460 "data_offset": 0, 00:12:33.460 "data_size": 65536 00:12:33.460 } 00:12:33.460 ] 00:12:33.460 }' 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.460 19:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.026 19:32:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.026 [2024-12-05 19:32:27.317587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.026 "name": "Existed_Raid", 00:12:34.026 "aliases": [ 00:12:34.026 "0012034d-5512-4cbf-ace6-ff20442c6b06" 00:12:34.026 ], 00:12:34.026 "product_name": "Raid Volume", 00:12:34.026 "block_size": 512, 00:12:34.026 "num_blocks": 196608, 00:12:34.026 "uuid": "0012034d-5512-4cbf-ace6-ff20442c6b06", 00:12:34.026 "assigned_rate_limits": { 00:12:34.026 "rw_ios_per_sec": 0, 00:12:34.026 "rw_mbytes_per_sec": 0, 00:12:34.026 "r_mbytes_per_sec": 0, 00:12:34.026 "w_mbytes_per_sec": 0 00:12:34.026 }, 00:12:34.026 "claimed": false, 00:12:34.026 "zoned": false, 00:12:34.026 "supported_io_types": { 00:12:34.026 "read": true, 00:12:34.026 "write": true, 00:12:34.026 "unmap": true, 00:12:34.026 "flush": true, 00:12:34.026 "reset": true, 00:12:34.026 "nvme_admin": false, 00:12:34.026 "nvme_io": false, 00:12:34.026 "nvme_io_md": false, 00:12:34.026 "write_zeroes": true, 00:12:34.026 "zcopy": false, 00:12:34.026 "get_zone_info": false, 00:12:34.026 "zone_management": false, 00:12:34.026 "zone_append": false, 00:12:34.026 "compare": false, 00:12:34.026 "compare_and_write": false, 00:12:34.026 "abort": false, 00:12:34.026 "seek_hole": false, 00:12:34.026 "seek_data": false, 00:12:34.026 "copy": false, 00:12:34.026 "nvme_iov_md": false 00:12:34.026 }, 00:12:34.026 "memory_domains": [ 00:12:34.026 { 00:12:34.026 "dma_device_id": "system", 00:12:34.026 "dma_device_type": 1 
00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.026 "dma_device_type": 2 00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "dma_device_id": "system", 00:12:34.026 "dma_device_type": 1 00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.026 "dma_device_type": 2 00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "dma_device_id": "system", 00:12:34.026 "dma_device_type": 1 00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.026 "dma_device_type": 2 00:12:34.026 } 00:12:34.026 ], 00:12:34.026 "driver_specific": { 00:12:34.026 "raid": { 00:12:34.026 "uuid": "0012034d-5512-4cbf-ace6-ff20442c6b06", 00:12:34.026 "strip_size_kb": 64, 00:12:34.026 "state": "online", 00:12:34.026 "raid_level": "concat", 00:12:34.026 "superblock": false, 00:12:34.026 "num_base_bdevs": 3, 00:12:34.026 "num_base_bdevs_discovered": 3, 00:12:34.026 "num_base_bdevs_operational": 3, 00:12:34.026 "base_bdevs_list": [ 00:12:34.026 { 00:12:34.026 "name": "BaseBdev1", 00:12:34.026 "uuid": "01eaee2c-ceae-40ba-baa7-c526de72ae66", 00:12:34.026 "is_configured": true, 00:12:34.026 "data_offset": 0, 00:12:34.026 "data_size": 65536 00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "name": "BaseBdev2", 00:12:34.026 "uuid": "33c13cad-cf83-492f-8bca-035fa40d345e", 00:12:34.026 "is_configured": true, 00:12:34.026 "data_offset": 0, 00:12:34.026 "data_size": 65536 00:12:34.026 }, 00:12:34.026 { 00:12:34.026 "name": "BaseBdev3", 00:12:34.026 "uuid": "f6235a4f-6187-4421-85ce-fbd7f734ea28", 00:12:34.026 "is_configured": true, 00:12:34.026 "data_offset": 0, 00:12:34.026 "data_size": 65536 00:12:34.026 } 00:12:34.026 ] 00:12:34.026 } 00:12:34.026 } 00:12:34.026 }' 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.026 BaseBdev2 00:12:34.026 BaseBdev3' 00:12:34.026 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.284 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.284 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.284 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.285 [2024-12-05 19:32:27.625366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.285 [2024-12-05 19:32:27.625402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.285 [2024-12-05 19:32:27.625475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:12:34.285 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.544 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.544 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.544 "name": "Existed_Raid", 00:12:34.544 "uuid": "0012034d-5512-4cbf-ace6-ff20442c6b06", 00:12:34.544 "strip_size_kb": 64, 00:12:34.544 "state": "offline", 00:12:34.544 "raid_level": "concat", 00:12:34.544 "superblock": false, 00:12:34.544 "num_base_bdevs": 3, 00:12:34.544 "num_base_bdevs_discovered": 2, 00:12:34.544 "num_base_bdevs_operational": 2, 00:12:34.544 "base_bdevs_list": [ 00:12:34.544 { 00:12:34.544 "name": null, 00:12:34.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.544 "is_configured": false, 00:12:34.544 "data_offset": 0, 00:12:34.544 "data_size": 65536 00:12:34.544 }, 00:12:34.544 { 00:12:34.544 "name": "BaseBdev2", 00:12:34.544 "uuid": "33c13cad-cf83-492f-8bca-035fa40d345e", 00:12:34.544 "is_configured": true, 00:12:34.544 "data_offset": 0, 00:12:34.544 "data_size": 65536 00:12:34.544 }, 00:12:34.544 { 00:12:34.544 "name": "BaseBdev3", 00:12:34.544 "uuid": "f6235a4f-6187-4421-85ce-fbd7f734ea28", 00:12:34.544 "is_configured": true, 00:12:34.544 "data_offset": 0, 00:12:34.544 "data_size": 65536 00:12:34.544 } 00:12:34.544 ] 00:12:34.544 }' 00:12:34.544 19:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.544 19:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.803 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:34.803 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.803 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.803 19:32:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.803 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.803 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.062 [2024-12-05 19:32:28.285390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.062 19:32:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.062 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.062 [2024-12-05 19:32:28.427556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.062 [2024-12-05 19:32:28.427783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:35.321 
19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.321 BaseBdev2 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.321 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.322 [ 00:12:35.322 { 00:12:35.322 "name": "BaseBdev2", 00:12:35.322 "aliases": [ 00:12:35.322 "6b5ae013-05cb-4c9b-875e-fe228a763f93" 00:12:35.322 ], 00:12:35.322 "product_name": "Malloc disk", 00:12:35.322 "block_size": 512, 00:12:35.322 "num_blocks": 65536, 00:12:35.322 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:35.322 "assigned_rate_limits": { 00:12:35.322 "rw_ios_per_sec": 0, 00:12:35.322 "rw_mbytes_per_sec": 0, 00:12:35.322 "r_mbytes_per_sec": 0, 00:12:35.322 "w_mbytes_per_sec": 0 00:12:35.322 }, 00:12:35.322 "claimed": false, 00:12:35.322 "zoned": false, 00:12:35.322 "supported_io_types": { 00:12:35.322 "read": true, 00:12:35.322 "write": true, 00:12:35.322 "unmap": true, 00:12:35.322 "flush": true, 00:12:35.322 "reset": true, 00:12:35.322 "nvme_admin": false, 00:12:35.322 "nvme_io": false, 00:12:35.322 "nvme_io_md": false, 00:12:35.322 "write_zeroes": true, 00:12:35.322 "zcopy": true, 00:12:35.322 "get_zone_info": false, 00:12:35.322 "zone_management": false, 00:12:35.322 "zone_append": false, 00:12:35.322 "compare": false, 00:12:35.322 "compare_and_write": false, 00:12:35.322 "abort": true, 00:12:35.322 "seek_hole": false, 00:12:35.322 "seek_data": false, 00:12:35.322 "copy": true, 00:12:35.322 "nvme_iov_md": false 00:12:35.322 }, 00:12:35.322 "memory_domains": [ 00:12:35.322 { 00:12:35.322 "dma_device_id": "system", 00:12:35.322 "dma_device_type": 1 00:12:35.322 }, 00:12:35.322 { 00:12:35.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.322 "dma_device_type": 2 00:12:35.322 } 00:12:35.322 ], 00:12:35.322 "driver_specific": {} 00:12:35.322 } 00:12:35.322 ] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.322 
19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.322 BaseBdev3 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.322 [ 00:12:35.322 { 00:12:35.322 "name": "BaseBdev3", 00:12:35.322 "aliases": [ 00:12:35.322 "a8196a54-9f04-4550-a3e4-b2b7fa11cd92" 00:12:35.322 ], 00:12:35.322 "product_name": "Malloc disk", 00:12:35.322 "block_size": 512, 00:12:35.322 "num_blocks": 65536, 00:12:35.322 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:35.322 "assigned_rate_limits": { 00:12:35.322 "rw_ios_per_sec": 0, 00:12:35.322 "rw_mbytes_per_sec": 0, 00:12:35.322 "r_mbytes_per_sec": 0, 00:12:35.322 "w_mbytes_per_sec": 0 00:12:35.322 }, 00:12:35.322 "claimed": false, 00:12:35.322 "zoned": false, 00:12:35.322 "supported_io_types": { 00:12:35.322 "read": true, 00:12:35.322 "write": true, 00:12:35.322 "unmap": true, 00:12:35.322 "flush": true, 00:12:35.322 "reset": true, 00:12:35.322 "nvme_admin": false, 00:12:35.322 "nvme_io": false, 00:12:35.322 "nvme_io_md": false, 00:12:35.322 "write_zeroes": true, 00:12:35.322 "zcopy": true, 00:12:35.322 "get_zone_info": false, 00:12:35.322 "zone_management": false, 00:12:35.322 "zone_append": false, 00:12:35.322 "compare": false, 00:12:35.322 "compare_and_write": false, 00:12:35.322 "abort": true, 00:12:35.322 "seek_hole": false, 00:12:35.322 "seek_data": false, 00:12:35.322 "copy": true, 00:12:35.322 "nvme_iov_md": false 00:12:35.322 }, 00:12:35.322 "memory_domains": [ 00:12:35.322 { 00:12:35.322 "dma_device_id": "system", 00:12:35.322 "dma_device_type": 1 00:12:35.322 }, 00:12:35.322 { 00:12:35.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.322 "dma_device_type": 2 00:12:35.322 } 00:12:35.322 ], 00:12:35.322 "driver_specific": {} 00:12:35.322 } 00:12:35.322 ] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.322 
19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.322 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.322 [2024-12-05 19:32:28.724251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.323 [2024-12-05 19:32:28.724305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.323 [2024-12-05 19:32:28.724371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.323 [2024-12-05 19:32:28.726827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.323 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.583 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.583 "name": "Existed_Raid", 00:12:35.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.583 "strip_size_kb": 64, 00:12:35.583 "state": "configuring", 00:12:35.583 "raid_level": "concat", 00:12:35.583 "superblock": false, 00:12:35.583 "num_base_bdevs": 3, 00:12:35.583 "num_base_bdevs_discovered": 2, 00:12:35.583 "num_base_bdevs_operational": 3, 00:12:35.583 "base_bdevs_list": [ 00:12:35.583 { 00:12:35.583 "name": "BaseBdev1", 00:12:35.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.583 "is_configured": false, 00:12:35.583 "data_offset": 0, 00:12:35.583 "data_size": 0 00:12:35.583 }, 00:12:35.583 { 00:12:35.583 "name": "BaseBdev2", 00:12:35.583 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:35.583 "is_configured": true, 00:12:35.583 "data_offset": 0, 00:12:35.583 "data_size": 65536 00:12:35.583 }, 00:12:35.583 { 00:12:35.583 "name": "BaseBdev3", 00:12:35.583 "uuid": 
"a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:35.583 "is_configured": true, 00:12:35.583 "data_offset": 0, 00:12:35.583 "data_size": 65536 00:12:35.583 } 00:12:35.583 ] 00:12:35.583 }' 00:12:35.583 19:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.583 19:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.842 [2024-12-05 19:32:29.236466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.842 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.100 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.100 "name": "Existed_Raid", 00:12:36.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.100 "strip_size_kb": 64, 00:12:36.100 "state": "configuring", 00:12:36.100 "raid_level": "concat", 00:12:36.100 "superblock": false, 00:12:36.100 "num_base_bdevs": 3, 00:12:36.100 "num_base_bdevs_discovered": 1, 00:12:36.100 "num_base_bdevs_operational": 3, 00:12:36.100 "base_bdevs_list": [ 00:12:36.100 { 00:12:36.100 "name": "BaseBdev1", 00:12:36.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.100 "is_configured": false, 00:12:36.100 "data_offset": 0, 00:12:36.100 "data_size": 0 00:12:36.100 }, 00:12:36.100 { 00:12:36.100 "name": null, 00:12:36.100 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:36.100 "is_configured": false, 00:12:36.100 "data_offset": 0, 00:12:36.100 "data_size": 65536 00:12:36.100 }, 00:12:36.100 { 00:12:36.100 "name": "BaseBdev3", 00:12:36.100 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:36.100 "is_configured": true, 00:12:36.100 "data_offset": 0, 00:12:36.100 "data_size": 65536 00:12:36.100 } 00:12:36.100 ] 00:12:36.100 }' 00:12:36.100 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:36.100 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.359 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.617 [2024-12-05 19:32:29.830513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.617 BaseBdev1 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.617 [ 00:12:36.617 { 00:12:36.617 "name": "BaseBdev1", 00:12:36.617 "aliases": [ 00:12:36.617 "14b7a8ab-4a21-4f27-8a51-254279a4974a" 00:12:36.617 ], 00:12:36.617 "product_name": "Malloc disk", 00:12:36.617 "block_size": 512, 00:12:36.617 "num_blocks": 65536, 00:12:36.617 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:36.617 "assigned_rate_limits": { 00:12:36.617 "rw_ios_per_sec": 0, 00:12:36.617 "rw_mbytes_per_sec": 0, 00:12:36.617 "r_mbytes_per_sec": 0, 00:12:36.617 "w_mbytes_per_sec": 0 00:12:36.617 }, 00:12:36.617 "claimed": true, 00:12:36.617 "claim_type": "exclusive_write", 00:12:36.617 "zoned": false, 00:12:36.617 "supported_io_types": { 00:12:36.617 "read": true, 00:12:36.617 "write": true, 00:12:36.617 "unmap": true, 00:12:36.617 "flush": true, 00:12:36.617 "reset": true, 00:12:36.617 "nvme_admin": false, 00:12:36.617 "nvme_io": false, 00:12:36.617 "nvme_io_md": false, 00:12:36.617 "write_zeroes": true, 00:12:36.617 "zcopy": true, 00:12:36.617 "get_zone_info": false, 00:12:36.617 "zone_management": false, 00:12:36.617 "zone_append": false, 00:12:36.617 "compare": false, 00:12:36.617 "compare_and_write": false, 
00:12:36.617 "abort": true, 00:12:36.617 "seek_hole": false, 00:12:36.617 "seek_data": false, 00:12:36.617 "copy": true, 00:12:36.617 "nvme_iov_md": false 00:12:36.617 }, 00:12:36.617 "memory_domains": [ 00:12:36.617 { 00:12:36.617 "dma_device_id": "system", 00:12:36.617 "dma_device_type": 1 00:12:36.617 }, 00:12:36.617 { 00:12:36.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.617 "dma_device_type": 2 00:12:36.617 } 00:12:36.617 ], 00:12:36.617 "driver_specific": {} 00:12:36.617 } 00:12:36.617 ] 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.617 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.618 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.618 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.618 "name": "Existed_Raid", 00:12:36.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.618 "strip_size_kb": 64, 00:12:36.618 "state": "configuring", 00:12:36.618 "raid_level": "concat", 00:12:36.618 "superblock": false, 00:12:36.618 "num_base_bdevs": 3, 00:12:36.618 "num_base_bdevs_discovered": 2, 00:12:36.618 "num_base_bdevs_operational": 3, 00:12:36.618 "base_bdevs_list": [ 00:12:36.618 { 00:12:36.618 "name": "BaseBdev1", 00:12:36.618 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:36.618 "is_configured": true, 00:12:36.618 "data_offset": 0, 00:12:36.618 "data_size": 65536 00:12:36.618 }, 00:12:36.618 { 00:12:36.618 "name": null, 00:12:36.618 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:36.618 "is_configured": false, 00:12:36.618 "data_offset": 0, 00:12:36.618 "data_size": 65536 00:12:36.618 }, 00:12:36.618 { 00:12:36.618 "name": "BaseBdev3", 00:12:36.618 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:36.618 "is_configured": true, 00:12:36.618 "data_offset": 0, 00:12:36.618 "data_size": 65536 00:12:36.618 } 00:12:36.618 ] 00:12:36.618 }' 00:12:36.618 19:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.618 19:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.185 19:32:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.185 [2024-12-05 19:32:30.430777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.185 "name": "Existed_Raid", 00:12:37.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.185 "strip_size_kb": 64, 00:12:37.185 "state": "configuring", 00:12:37.185 "raid_level": "concat", 00:12:37.185 "superblock": false, 00:12:37.185 "num_base_bdevs": 3, 00:12:37.185 "num_base_bdevs_discovered": 1, 00:12:37.185 "num_base_bdevs_operational": 3, 00:12:37.185 "base_bdevs_list": [ 00:12:37.185 { 00:12:37.185 "name": "BaseBdev1", 00:12:37.185 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:37.185 "is_configured": true, 00:12:37.185 "data_offset": 0, 00:12:37.185 "data_size": 65536 00:12:37.185 }, 00:12:37.185 { 00:12:37.185 "name": null, 00:12:37.185 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:37.185 "is_configured": false, 00:12:37.185 "data_offset": 0, 00:12:37.185 "data_size": 65536 00:12:37.185 }, 00:12:37.185 { 00:12:37.185 "name": null, 00:12:37.185 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:37.185 "is_configured": false, 00:12:37.185 "data_offset": 0, 00:12:37.185 "data_size": 65536 00:12:37.185 
} 00:12:37.185 ] 00:12:37.185 }' 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.185 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:37.753 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.754 [2024-12-05 19:32:30.994930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.754 19:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.754 "name": "Existed_Raid", 00:12:37.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.754 "strip_size_kb": 64, 00:12:37.754 "state": "configuring", 00:12:37.754 "raid_level": "concat", 00:12:37.754 "superblock": false, 00:12:37.754 "num_base_bdevs": 3, 00:12:37.754 "num_base_bdevs_discovered": 2, 00:12:37.754 "num_base_bdevs_operational": 3, 00:12:37.754 "base_bdevs_list": [ 00:12:37.754 { 00:12:37.754 "name": "BaseBdev1", 00:12:37.754 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:37.754 "is_configured": true, 00:12:37.754 "data_offset": 0, 00:12:37.754 "data_size": 65536 00:12:37.754 }, 00:12:37.754 { 
00:12:37.754 "name": null, 00:12:37.754 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:37.754 "is_configured": false, 00:12:37.754 "data_offset": 0, 00:12:37.754 "data_size": 65536 00:12:37.754 }, 00:12:37.754 { 00:12:37.754 "name": "BaseBdev3", 00:12:37.754 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:37.754 "is_configured": true, 00:12:37.754 "data_offset": 0, 00:12:37.754 "data_size": 65536 00:12:37.754 } 00:12:37.754 ] 00:12:37.754 }' 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.754 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.355 [2024-12-05 19:32:31.559116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.355 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.355 "name": "Existed_Raid", 00:12:38.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.355 "strip_size_kb": 64, 00:12:38.355 "state": "configuring", 00:12:38.355 "raid_level": "concat", 00:12:38.355 "superblock": false, 00:12:38.355 "num_base_bdevs": 3, 
00:12:38.355 "num_base_bdevs_discovered": 1, 00:12:38.355 "num_base_bdevs_operational": 3, 00:12:38.355 "base_bdevs_list": [ 00:12:38.355 { 00:12:38.355 "name": null, 00:12:38.355 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:38.355 "is_configured": false, 00:12:38.355 "data_offset": 0, 00:12:38.355 "data_size": 65536 00:12:38.355 }, 00:12:38.355 { 00:12:38.355 "name": null, 00:12:38.355 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:38.356 "is_configured": false, 00:12:38.356 "data_offset": 0, 00:12:38.356 "data_size": 65536 00:12:38.356 }, 00:12:38.356 { 00:12:38.356 "name": "BaseBdev3", 00:12:38.356 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:38.356 "is_configured": true, 00:12:38.356 "data_offset": 0, 00:12:38.356 "data_size": 65536 00:12:38.356 } 00:12:38.356 ] 00:12:38.356 }' 00:12:38.356 19:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.356 19:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.923 19:32:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 [2024-12-05 19:32:32.236543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.923 "name": "Existed_Raid", 00:12:38.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.923 "strip_size_kb": 64, 00:12:38.923 "state": "configuring", 00:12:38.923 "raid_level": "concat", 00:12:38.923 "superblock": false, 00:12:38.923 "num_base_bdevs": 3, 00:12:38.923 "num_base_bdevs_discovered": 2, 00:12:38.923 "num_base_bdevs_operational": 3, 00:12:38.923 "base_bdevs_list": [ 00:12:38.923 { 00:12:38.923 "name": null, 00:12:38.923 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:38.923 "is_configured": false, 00:12:38.923 "data_offset": 0, 00:12:38.923 "data_size": 65536 00:12:38.923 }, 00:12:38.923 { 00:12:38.923 "name": "BaseBdev2", 00:12:38.923 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:38.923 "is_configured": true, 00:12:38.923 "data_offset": 0, 00:12:38.923 "data_size": 65536 00:12:38.923 }, 00:12:38.923 { 00:12:38.923 "name": "BaseBdev3", 00:12:38.923 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:38.923 "is_configured": true, 00:12:38.923 "data_offset": 0, 00:12:38.923 "data_size": 65536 00:12:38.923 } 00:12:38.923 ] 00:12:38.923 }' 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.923 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 14b7a8ab-4a21-4f27-8a51-254279a4974a 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 [2024-12-05 19:32:32.899137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:39.490 [2024-12-05 19:32:32.899191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.490 [2024-12-05 19:32:32.899206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:39.490 [2024-12-05 19:32:32.899525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:39.490 [2024-12-05 19:32:32.899760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.490 [2024-12-05 19:32:32.899778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:39.490 [2024-12-05 19:32:32.900089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:39.490 NewBaseBdev 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.490 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.490 [ 00:12:39.490 { 00:12:39.490 "name": "NewBaseBdev", 00:12:39.490 "aliases": [ 00:12:39.490 "14b7a8ab-4a21-4f27-8a51-254279a4974a" 00:12:39.490 ], 00:12:39.490 "product_name": "Malloc disk", 00:12:39.490 "block_size": 512, 00:12:39.490 "num_blocks": 65536, 00:12:39.490 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:39.490 "assigned_rate_limits": { 
00:12:39.490 "rw_ios_per_sec": 0, 00:12:39.490 "rw_mbytes_per_sec": 0, 00:12:39.749 "r_mbytes_per_sec": 0, 00:12:39.749 "w_mbytes_per_sec": 0 00:12:39.749 }, 00:12:39.749 "claimed": true, 00:12:39.749 "claim_type": "exclusive_write", 00:12:39.749 "zoned": false, 00:12:39.749 "supported_io_types": { 00:12:39.750 "read": true, 00:12:39.750 "write": true, 00:12:39.750 "unmap": true, 00:12:39.750 "flush": true, 00:12:39.750 "reset": true, 00:12:39.750 "nvme_admin": false, 00:12:39.750 "nvme_io": false, 00:12:39.750 "nvme_io_md": false, 00:12:39.750 "write_zeroes": true, 00:12:39.750 "zcopy": true, 00:12:39.750 "get_zone_info": false, 00:12:39.750 "zone_management": false, 00:12:39.750 "zone_append": false, 00:12:39.750 "compare": false, 00:12:39.750 "compare_and_write": false, 00:12:39.750 "abort": true, 00:12:39.750 "seek_hole": false, 00:12:39.750 "seek_data": false, 00:12:39.750 "copy": true, 00:12:39.750 "nvme_iov_md": false 00:12:39.750 }, 00:12:39.750 "memory_domains": [ 00:12:39.750 { 00:12:39.750 "dma_device_id": "system", 00:12:39.750 "dma_device_type": 1 00:12:39.750 }, 00:12:39.750 { 00:12:39.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.750 "dma_device_type": 2 00:12:39.750 } 00:12:39.750 ], 00:12:39.750 "driver_specific": {} 00:12:39.750 } 00:12:39.750 ] 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.750 "name": "Existed_Raid", 00:12:39.750 "uuid": "7f5ad0eb-c8fd-43ba-8487-4e98bbfd7927", 00:12:39.750 "strip_size_kb": 64, 00:12:39.750 "state": "online", 00:12:39.750 "raid_level": "concat", 00:12:39.750 "superblock": false, 00:12:39.750 "num_base_bdevs": 3, 00:12:39.750 "num_base_bdevs_discovered": 3, 00:12:39.750 "num_base_bdevs_operational": 3, 00:12:39.750 "base_bdevs_list": [ 00:12:39.750 { 00:12:39.750 "name": "NewBaseBdev", 00:12:39.750 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:39.750 "is_configured": true, 00:12:39.750 "data_offset": 0, 00:12:39.750 "data_size": 65536 00:12:39.750 }, 00:12:39.750 { 00:12:39.750 "name": 
"BaseBdev2", 00:12:39.750 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:39.750 "is_configured": true, 00:12:39.750 "data_offset": 0, 00:12:39.750 "data_size": 65536 00:12:39.750 }, 00:12:39.750 { 00:12:39.750 "name": "BaseBdev3", 00:12:39.750 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:39.750 "is_configured": true, 00:12:39.750 "data_offset": 0, 00:12:39.750 "data_size": 65536 00:12:39.750 } 00:12:39.750 ] 00:12:39.750 }' 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.750 19:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.008 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.267 [2024-12-05 19:32:33.451747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.267 "name": "Existed_Raid", 00:12:40.267 "aliases": [ 00:12:40.267 "7f5ad0eb-c8fd-43ba-8487-4e98bbfd7927" 00:12:40.267 ], 00:12:40.267 "product_name": "Raid Volume", 00:12:40.267 "block_size": 512, 00:12:40.267 "num_blocks": 196608, 00:12:40.267 "uuid": "7f5ad0eb-c8fd-43ba-8487-4e98bbfd7927", 00:12:40.267 "assigned_rate_limits": { 00:12:40.267 "rw_ios_per_sec": 0, 00:12:40.267 "rw_mbytes_per_sec": 0, 00:12:40.267 "r_mbytes_per_sec": 0, 00:12:40.267 "w_mbytes_per_sec": 0 00:12:40.267 }, 00:12:40.267 "claimed": false, 00:12:40.267 "zoned": false, 00:12:40.267 "supported_io_types": { 00:12:40.267 "read": true, 00:12:40.267 "write": true, 00:12:40.267 "unmap": true, 00:12:40.267 "flush": true, 00:12:40.267 "reset": true, 00:12:40.267 "nvme_admin": false, 00:12:40.267 "nvme_io": false, 00:12:40.267 "nvme_io_md": false, 00:12:40.267 "write_zeroes": true, 00:12:40.267 "zcopy": false, 00:12:40.267 "get_zone_info": false, 00:12:40.267 "zone_management": false, 00:12:40.267 "zone_append": false, 00:12:40.267 "compare": false, 00:12:40.267 "compare_and_write": false, 00:12:40.267 "abort": false, 00:12:40.267 "seek_hole": false, 00:12:40.267 "seek_data": false, 00:12:40.267 "copy": false, 00:12:40.267 "nvme_iov_md": false 00:12:40.267 }, 00:12:40.267 "memory_domains": [ 00:12:40.267 { 00:12:40.267 "dma_device_id": "system", 00:12:40.267 "dma_device_type": 1 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.267 "dma_device_type": 2 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "dma_device_id": "system", 00:12:40.267 "dma_device_type": 1 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.267 "dma_device_type": 2 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "dma_device_id": "system", 00:12:40.267 "dma_device_type": 1 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:40.267 "dma_device_type": 2 00:12:40.267 } 00:12:40.267 ], 00:12:40.267 "driver_specific": { 00:12:40.267 "raid": { 00:12:40.267 "uuid": "7f5ad0eb-c8fd-43ba-8487-4e98bbfd7927", 00:12:40.267 "strip_size_kb": 64, 00:12:40.267 "state": "online", 00:12:40.267 "raid_level": "concat", 00:12:40.267 "superblock": false, 00:12:40.267 "num_base_bdevs": 3, 00:12:40.267 "num_base_bdevs_discovered": 3, 00:12:40.267 "num_base_bdevs_operational": 3, 00:12:40.267 "base_bdevs_list": [ 00:12:40.267 { 00:12:40.267 "name": "NewBaseBdev", 00:12:40.267 "uuid": "14b7a8ab-4a21-4f27-8a51-254279a4974a", 00:12:40.267 "is_configured": true, 00:12:40.267 "data_offset": 0, 00:12:40.267 "data_size": 65536 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "name": "BaseBdev2", 00:12:40.267 "uuid": "6b5ae013-05cb-4c9b-875e-fe228a763f93", 00:12:40.267 "is_configured": true, 00:12:40.267 "data_offset": 0, 00:12:40.267 "data_size": 65536 00:12:40.267 }, 00:12:40.267 { 00:12:40.267 "name": "BaseBdev3", 00:12:40.267 "uuid": "a8196a54-9f04-4550-a3e4-b2b7fa11cd92", 00:12:40.267 "is_configured": true, 00:12:40.267 "data_offset": 0, 00:12:40.267 "data_size": 65536 00:12:40.267 } 00:12:40.267 ] 00:12:40.267 } 00:12:40.267 } 00:12:40.267 }' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.267 BaseBdev2 00:12:40.267 BaseBdev3' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.267 19:32:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.267 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.268 
19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.268 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.526 [2024-12-05 19:32:33.755396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.526 [2024-12-05 19:32:33.755430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.526 [2024-12-05 19:32:33.755525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.526 [2024-12-05 19:32:33.755600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.526 [2024-12-05 19:32:33.755633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65619 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65619 ']' 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65619 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65619 00:12:40.526 killing process with pid 65619 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65619' 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65619 00:12:40.526 [2024-12-05 19:32:33.795236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.526 19:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65619 00:12:40.785 [2024-12-05 19:32:34.068622] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.719 19:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:41.719 00:12:41.719 real 0m11.850s 00:12:41.719 user 0m19.629s 00:12:41.719 sys 0m1.625s 00:12:41.719 19:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.719 ************************************ 00:12:41.719 END TEST raid_state_function_test 00:12:41.719 19:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.719 ************************************ 00:12:41.978 19:32:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:12:41.978 19:32:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:41.978 19:32:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.978 19:32:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.978 ************************************ 00:12:41.978 START TEST raid_state_function_test_sb 00:12:41.978 ************************************ 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:41.978 Process raid pid: 66256 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66256 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66256' 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 66256 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66256 ']' 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.978 19:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.978 [2024-12-05 19:32:35.284343] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:41.978 [2024-12-05 19:32:35.284502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.236 [2024-12-05 19:32:35.466923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.236 [2024-12-05 19:32:35.621571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.562 [2024-12-05 19:32:35.846887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.562 [2024-12-05 19:32:35.846932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.821 [2024-12-05 19:32:36.247767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.821 [2024-12-05 19:32:36.247835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.821 [2024-12-05 19:32:36.247853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.821 [2024-12-05 19:32:36.247870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.821 [2024-12-05 19:32:36.247881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:42.821 [2024-12-05 19:32:36.247896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.821 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.080 19:32:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.080 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.080 "name": "Existed_Raid", 00:12:43.080 "uuid": "568ed862-80a1-4586-b7ff-6bb04051ac4f", 00:12:43.080 "strip_size_kb": 64, 00:12:43.080 "state": "configuring", 00:12:43.080 "raid_level": "concat", 00:12:43.080 "superblock": true, 00:12:43.080 "num_base_bdevs": 3, 00:12:43.080 "num_base_bdevs_discovered": 0, 00:12:43.080 "num_base_bdevs_operational": 3, 00:12:43.080 "base_bdevs_list": [ 00:12:43.080 { 00:12:43.080 "name": "BaseBdev1", 00:12:43.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.080 "is_configured": false, 00:12:43.080 "data_offset": 0, 00:12:43.080 "data_size": 0 00:12:43.080 }, 00:12:43.080 { 00:12:43.080 "name": "BaseBdev2", 00:12:43.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.080 "is_configured": false, 00:12:43.080 "data_offset": 0, 00:12:43.080 "data_size": 0 00:12:43.080 }, 00:12:43.080 { 00:12:43.080 "name": "BaseBdev3", 00:12:43.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.080 "is_configured": false, 00:12:43.080 "data_offset": 0, 00:12:43.080 "data_size": 0 00:12:43.080 } 00:12:43.080 ] 00:12:43.080 }' 00:12:43.080 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.080 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.338 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.338 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.338 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.597 [2024-12-05 19:32:36.783852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.597 [2024-12-05 19:32:36.784030] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.597 [2024-12-05 19:32:36.791846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.597 [2024-12-05 19:32:36.791901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.597 [2024-12-05 19:32:36.791917] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.597 [2024-12-05 19:32:36.791934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.597 [2024-12-05 19:32:36.791944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.597 [2024-12-05 19:32:36.791959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.597 [2024-12-05 19:32:36.836930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.597 BaseBdev1 
00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.597 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.598 [ 00:12:43.598 { 00:12:43.598 "name": "BaseBdev1", 00:12:43.598 "aliases": [ 00:12:43.598 "1899834d-e274-4e5c-9976-38810c70f141" 00:12:43.598 ], 00:12:43.598 "product_name": "Malloc disk", 00:12:43.598 "block_size": 512, 00:12:43.598 "num_blocks": 65536, 00:12:43.598 "uuid": "1899834d-e274-4e5c-9976-38810c70f141", 00:12:43.598 "assigned_rate_limits": { 00:12:43.598 
"rw_ios_per_sec": 0, 00:12:43.598 "rw_mbytes_per_sec": 0, 00:12:43.598 "r_mbytes_per_sec": 0, 00:12:43.598 "w_mbytes_per_sec": 0 00:12:43.598 }, 00:12:43.598 "claimed": true, 00:12:43.598 "claim_type": "exclusive_write", 00:12:43.598 "zoned": false, 00:12:43.598 "supported_io_types": { 00:12:43.598 "read": true, 00:12:43.598 "write": true, 00:12:43.598 "unmap": true, 00:12:43.598 "flush": true, 00:12:43.598 "reset": true, 00:12:43.598 "nvme_admin": false, 00:12:43.598 "nvme_io": false, 00:12:43.598 "nvme_io_md": false, 00:12:43.598 "write_zeroes": true, 00:12:43.598 "zcopy": true, 00:12:43.598 "get_zone_info": false, 00:12:43.598 "zone_management": false, 00:12:43.598 "zone_append": false, 00:12:43.598 "compare": false, 00:12:43.598 "compare_and_write": false, 00:12:43.598 "abort": true, 00:12:43.598 "seek_hole": false, 00:12:43.598 "seek_data": false, 00:12:43.598 "copy": true, 00:12:43.598 "nvme_iov_md": false 00:12:43.598 }, 00:12:43.598 "memory_domains": [ 00:12:43.598 { 00:12:43.598 "dma_device_id": "system", 00:12:43.598 "dma_device_type": 1 00:12:43.598 }, 00:12:43.598 { 00:12:43.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.598 "dma_device_type": 2 00:12:43.598 } 00:12:43.598 ], 00:12:43.598 "driver_specific": {} 00:12:43.598 } 00:12:43.598 ] 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.598 "name": "Existed_Raid", 00:12:43.598 "uuid": "3a9318dd-1045-4293-9bd5-5a949572facf", 00:12:43.598 "strip_size_kb": 64, 00:12:43.598 "state": "configuring", 00:12:43.598 "raid_level": "concat", 00:12:43.598 "superblock": true, 00:12:43.598 "num_base_bdevs": 3, 00:12:43.598 "num_base_bdevs_discovered": 1, 00:12:43.598 "num_base_bdevs_operational": 3, 00:12:43.598 "base_bdevs_list": [ 00:12:43.598 { 00:12:43.598 "name": "BaseBdev1", 00:12:43.598 "uuid": "1899834d-e274-4e5c-9976-38810c70f141", 00:12:43.598 "is_configured": true, 00:12:43.598 "data_offset": 2048, 00:12:43.598 "data_size": 
63488 00:12:43.598 }, 00:12:43.598 { 00:12:43.598 "name": "BaseBdev2", 00:12:43.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.598 "is_configured": false, 00:12:43.598 "data_offset": 0, 00:12:43.598 "data_size": 0 00:12:43.598 }, 00:12:43.598 { 00:12:43.598 "name": "BaseBdev3", 00:12:43.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.598 "is_configured": false, 00:12:43.598 "data_offset": 0, 00:12:43.598 "data_size": 0 00:12:43.598 } 00:12:43.598 ] 00:12:43.598 }' 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.598 19:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.165 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.165 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.165 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.165 [2024-12-05 19:32:37.389165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.166 [2024-12-05 19:32:37.389229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.166 [2024-12-05 19:32:37.397218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.166 [2024-12-05 
19:32:37.399664] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.166 [2024-12-05 19:32:37.399730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.166 [2024-12-05 19:32:37.399748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.166 [2024-12-05 19:32:37.399765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.166 "name": "Existed_Raid", 00:12:44.166 "uuid": "2075c3f5-de38-435c-963f-d374880d34fd", 00:12:44.166 "strip_size_kb": 64, 00:12:44.166 "state": "configuring", 00:12:44.166 "raid_level": "concat", 00:12:44.166 "superblock": true, 00:12:44.166 "num_base_bdevs": 3, 00:12:44.166 "num_base_bdevs_discovered": 1, 00:12:44.166 "num_base_bdevs_operational": 3, 00:12:44.166 "base_bdevs_list": [ 00:12:44.166 { 00:12:44.166 "name": "BaseBdev1", 00:12:44.166 "uuid": "1899834d-e274-4e5c-9976-38810c70f141", 00:12:44.166 "is_configured": true, 00:12:44.166 "data_offset": 2048, 00:12:44.166 "data_size": 63488 00:12:44.166 }, 00:12:44.166 { 00:12:44.166 "name": "BaseBdev2", 00:12:44.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.166 "is_configured": false, 00:12:44.166 "data_offset": 0, 00:12:44.166 "data_size": 0 00:12:44.166 }, 00:12:44.166 { 00:12:44.166 "name": "BaseBdev3", 00:12:44.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.166 "is_configured": false, 00:12:44.166 "data_offset": 0, 00:12:44.166 "data_size": 0 00:12:44.166 } 00:12:44.166 ] 00:12:44.166 }' 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.166 19:32:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.733 [2024-12-05 19:32:37.943600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.733 BaseBdev2 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.733 [ 00:12:44.733 { 00:12:44.733 "name": "BaseBdev2", 00:12:44.733 "aliases": [ 00:12:44.733 "a5578666-5aca-431c-a461-5aba8807d904" 00:12:44.733 ], 00:12:44.733 "product_name": "Malloc disk", 00:12:44.733 "block_size": 512, 00:12:44.733 "num_blocks": 65536, 00:12:44.733 "uuid": "a5578666-5aca-431c-a461-5aba8807d904", 00:12:44.733 "assigned_rate_limits": { 00:12:44.733 "rw_ios_per_sec": 0, 00:12:44.733 "rw_mbytes_per_sec": 0, 00:12:44.733 "r_mbytes_per_sec": 0, 00:12:44.733 "w_mbytes_per_sec": 0 00:12:44.733 }, 00:12:44.733 "claimed": true, 00:12:44.733 "claim_type": "exclusive_write", 00:12:44.733 "zoned": false, 00:12:44.733 "supported_io_types": { 00:12:44.733 "read": true, 00:12:44.733 "write": true, 00:12:44.733 "unmap": true, 00:12:44.733 "flush": true, 00:12:44.733 "reset": true, 00:12:44.733 "nvme_admin": false, 00:12:44.733 "nvme_io": false, 00:12:44.733 "nvme_io_md": false, 00:12:44.733 "write_zeroes": true, 00:12:44.733 "zcopy": true, 00:12:44.733 "get_zone_info": false, 00:12:44.733 "zone_management": false, 00:12:44.733 "zone_append": false, 00:12:44.733 "compare": false, 00:12:44.733 "compare_and_write": false, 00:12:44.733 "abort": true, 00:12:44.733 "seek_hole": false, 00:12:44.733 "seek_data": false, 00:12:44.733 "copy": true, 00:12:44.733 "nvme_iov_md": false 00:12:44.733 }, 00:12:44.733 "memory_domains": [ 00:12:44.733 { 00:12:44.733 "dma_device_id": "system", 00:12:44.733 "dma_device_type": 1 00:12:44.733 }, 00:12:44.733 { 00:12:44.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.733 "dma_device_type": 2 00:12:44.733 } 00:12:44.733 ], 00:12:44.733 "driver_specific": {} 00:12:44.733 } 00:12:44.733 ] 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.733 19:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.733 19:32:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.733 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.733 "name": "Existed_Raid", 00:12:44.733 "uuid": "2075c3f5-de38-435c-963f-d374880d34fd", 00:12:44.733 "strip_size_kb": 64, 00:12:44.733 "state": "configuring", 00:12:44.733 "raid_level": "concat", 00:12:44.733 "superblock": true, 00:12:44.733 "num_base_bdevs": 3, 00:12:44.733 "num_base_bdevs_discovered": 2, 00:12:44.733 "num_base_bdevs_operational": 3, 00:12:44.733 "base_bdevs_list": [ 00:12:44.733 { 00:12:44.733 "name": "BaseBdev1", 00:12:44.733 "uuid": "1899834d-e274-4e5c-9976-38810c70f141", 00:12:44.733 "is_configured": true, 00:12:44.733 "data_offset": 2048, 00:12:44.733 "data_size": 63488 00:12:44.733 }, 00:12:44.733 { 00:12:44.733 "name": "BaseBdev2", 00:12:44.733 "uuid": "a5578666-5aca-431c-a461-5aba8807d904", 00:12:44.733 "is_configured": true, 00:12:44.733 "data_offset": 2048, 00:12:44.733 "data_size": 63488 00:12:44.733 }, 00:12:44.733 { 00:12:44.733 "name": "BaseBdev3", 00:12:44.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.733 "is_configured": false, 00:12:44.733 "data_offset": 0, 00:12:44.733 "data_size": 0 00:12:44.733 } 00:12:44.733 ] 00:12:44.733 }' 00:12:44.733 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.733 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.300 [2024-12-05 19:32:38.536968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.300 [2024-12-05 19:32:38.537460] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:45.300 BaseBdev3 00:12:45.300 [2024-12-05 19:32:38.537611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:45.300 [2024-12-05 19:32:38.538131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.300 [2024-12-05 19:32:38.538356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:45.300 [2024-12-05 19:32:38.538375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:45.300 [2024-12-05 19:32:38.538558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.300 [ 00:12:45.300 { 00:12:45.300 "name": "BaseBdev3", 00:12:45.300 "aliases": [ 00:12:45.300 "7e6ada0c-52fe-46b5-9687-b1e7d3152664" 00:12:45.300 ], 00:12:45.300 "product_name": "Malloc disk", 00:12:45.300 "block_size": 512, 00:12:45.300 "num_blocks": 65536, 00:12:45.300 "uuid": "7e6ada0c-52fe-46b5-9687-b1e7d3152664", 00:12:45.300 "assigned_rate_limits": { 00:12:45.300 "rw_ios_per_sec": 0, 00:12:45.300 "rw_mbytes_per_sec": 0, 00:12:45.300 "r_mbytes_per_sec": 0, 00:12:45.300 "w_mbytes_per_sec": 0 00:12:45.300 }, 00:12:45.300 "claimed": true, 00:12:45.300 "claim_type": "exclusive_write", 00:12:45.300 "zoned": false, 00:12:45.300 "supported_io_types": { 00:12:45.300 "read": true, 00:12:45.300 "write": true, 00:12:45.300 "unmap": true, 00:12:45.300 "flush": true, 00:12:45.300 "reset": true, 00:12:45.300 "nvme_admin": false, 00:12:45.300 "nvme_io": false, 00:12:45.300 "nvme_io_md": false, 00:12:45.300 "write_zeroes": true, 00:12:45.300 "zcopy": true, 00:12:45.300 "get_zone_info": false, 00:12:45.300 "zone_management": false, 00:12:45.300 "zone_append": false, 00:12:45.300 "compare": false, 00:12:45.300 "compare_and_write": false, 00:12:45.300 "abort": true, 00:12:45.300 "seek_hole": false, 00:12:45.300 "seek_data": false, 00:12:45.300 "copy": true, 00:12:45.300 "nvme_iov_md": false 00:12:45.300 }, 00:12:45.300 "memory_domains": [ 00:12:45.300 { 00:12:45.300 "dma_device_id": "system", 00:12:45.300 "dma_device_type": 1 00:12:45.300 }, 00:12:45.300 { 00:12:45.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.300 "dma_device_type": 2 00:12:45.300 } 00:12:45.300 ], 00:12:45.300 "driver_specific": 
{} 00:12:45.300 } 00:12:45.300 ] 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:45.300 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.301 "name": "Existed_Raid", 00:12:45.301 "uuid": "2075c3f5-de38-435c-963f-d374880d34fd", 00:12:45.301 "strip_size_kb": 64, 00:12:45.301 "state": "online", 00:12:45.301 "raid_level": "concat", 00:12:45.301 "superblock": true, 00:12:45.301 "num_base_bdevs": 3, 00:12:45.301 "num_base_bdevs_discovered": 3, 00:12:45.301 "num_base_bdevs_operational": 3, 00:12:45.301 "base_bdevs_list": [ 00:12:45.301 { 00:12:45.301 "name": "BaseBdev1", 00:12:45.301 "uuid": "1899834d-e274-4e5c-9976-38810c70f141", 00:12:45.301 "is_configured": true, 00:12:45.301 "data_offset": 2048, 00:12:45.301 "data_size": 63488 00:12:45.301 }, 00:12:45.301 { 00:12:45.301 "name": "BaseBdev2", 00:12:45.301 "uuid": "a5578666-5aca-431c-a461-5aba8807d904", 00:12:45.301 "is_configured": true, 00:12:45.301 "data_offset": 2048, 00:12:45.301 "data_size": 63488 00:12:45.301 }, 00:12:45.301 { 00:12:45.301 "name": "BaseBdev3", 00:12:45.301 "uuid": "7e6ada0c-52fe-46b5-9687-b1e7d3152664", 00:12:45.301 "is_configured": true, 00:12:45.301 "data_offset": 2048, 00:12:45.301 "data_size": 63488 00:12:45.301 } 00:12:45.301 ] 00:12:45.301 }' 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.301 19:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.868 [2024-12-05 19:32:39.081547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.868 "name": "Existed_Raid", 00:12:45.868 "aliases": [ 00:12:45.868 "2075c3f5-de38-435c-963f-d374880d34fd" 00:12:45.868 ], 00:12:45.868 "product_name": "Raid Volume", 00:12:45.868 "block_size": 512, 00:12:45.868 "num_blocks": 190464, 00:12:45.868 "uuid": "2075c3f5-de38-435c-963f-d374880d34fd", 00:12:45.868 "assigned_rate_limits": { 00:12:45.868 "rw_ios_per_sec": 0, 00:12:45.868 "rw_mbytes_per_sec": 0, 00:12:45.868 "r_mbytes_per_sec": 0, 00:12:45.868 "w_mbytes_per_sec": 0 00:12:45.868 }, 00:12:45.868 "claimed": false, 00:12:45.868 "zoned": false, 00:12:45.868 "supported_io_types": { 00:12:45.868 "read": true, 00:12:45.868 "write": true, 00:12:45.868 "unmap": true, 00:12:45.868 "flush": true, 00:12:45.868 "reset": true, 00:12:45.868 "nvme_admin": false, 00:12:45.868 "nvme_io": false, 00:12:45.868 "nvme_io_md": false, 00:12:45.868 
"write_zeroes": true, 00:12:45.868 "zcopy": false, 00:12:45.868 "get_zone_info": false, 00:12:45.868 "zone_management": false, 00:12:45.868 "zone_append": false, 00:12:45.868 "compare": false, 00:12:45.868 "compare_and_write": false, 00:12:45.868 "abort": false, 00:12:45.868 "seek_hole": false, 00:12:45.868 "seek_data": false, 00:12:45.868 "copy": false, 00:12:45.868 "nvme_iov_md": false 00:12:45.868 }, 00:12:45.868 "memory_domains": [ 00:12:45.868 { 00:12:45.868 "dma_device_id": "system", 00:12:45.868 "dma_device_type": 1 00:12:45.868 }, 00:12:45.868 { 00:12:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.868 "dma_device_type": 2 00:12:45.868 }, 00:12:45.868 { 00:12:45.868 "dma_device_id": "system", 00:12:45.868 "dma_device_type": 1 00:12:45.868 }, 00:12:45.868 { 00:12:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.868 "dma_device_type": 2 00:12:45.868 }, 00:12:45.868 { 00:12:45.868 "dma_device_id": "system", 00:12:45.868 "dma_device_type": 1 00:12:45.868 }, 00:12:45.868 { 00:12:45.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.868 "dma_device_type": 2 00:12:45.868 } 00:12:45.868 ], 00:12:45.868 "driver_specific": { 00:12:45.868 "raid": { 00:12:45.868 "uuid": "2075c3f5-de38-435c-963f-d374880d34fd", 00:12:45.868 "strip_size_kb": 64, 00:12:45.868 "state": "online", 00:12:45.868 "raid_level": "concat", 00:12:45.868 "superblock": true, 00:12:45.868 "num_base_bdevs": 3, 00:12:45.868 "num_base_bdevs_discovered": 3, 00:12:45.868 "num_base_bdevs_operational": 3, 00:12:45.868 "base_bdevs_list": [ 00:12:45.868 { 00:12:45.868 "name": "BaseBdev1", 00:12:45.868 "uuid": "1899834d-e274-4e5c-9976-38810c70f141", 00:12:45.868 "is_configured": true, 00:12:45.868 "data_offset": 2048, 00:12:45.868 "data_size": 63488 00:12:45.868 }, 00:12:45.868 { 00:12:45.868 "name": "BaseBdev2", 00:12:45.868 "uuid": "a5578666-5aca-431c-a461-5aba8807d904", 00:12:45.868 "is_configured": true, 00:12:45.868 "data_offset": 2048, 00:12:45.868 "data_size": 63488 00:12:45.868 }, 
00:12:45.868 { 00:12:45.868 "name": "BaseBdev3", 00:12:45.868 "uuid": "7e6ada0c-52fe-46b5-9687-b1e7d3152664", 00:12:45.868 "is_configured": true, 00:12:45.868 "data_offset": 2048, 00:12:45.868 "data_size": 63488 00:12:45.868 } 00:12:45.868 ] 00:12:45.868 } 00:12:45.868 } 00:12:45.868 }' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:45.868 BaseBdev2 00:12:45.868 BaseBdev3' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.868 
19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.868 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:45.869 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.869 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.149 [2024-12-05 19:32:39.409327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.149 [2024-12-05 19:32:39.409361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.149 [2024-12-05 19:32:39.409461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.149 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.150 "name": "Existed_Raid", 00:12:46.150 "uuid": "2075c3f5-de38-435c-963f-d374880d34fd", 00:12:46.150 "strip_size_kb": 64, 00:12:46.150 "state": "offline", 00:12:46.150 "raid_level": "concat", 00:12:46.150 "superblock": true, 00:12:46.150 "num_base_bdevs": 3, 00:12:46.150 "num_base_bdevs_discovered": 2, 00:12:46.150 "num_base_bdevs_operational": 2, 00:12:46.150 "base_bdevs_list": [ 00:12:46.150 { 00:12:46.150 "name": null, 00:12:46.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.150 "is_configured": false, 00:12:46.150 "data_offset": 0, 00:12:46.150 "data_size": 63488 00:12:46.150 }, 00:12:46.150 { 00:12:46.150 "name": "BaseBdev2", 00:12:46.150 "uuid": "a5578666-5aca-431c-a461-5aba8807d904", 00:12:46.150 "is_configured": true, 00:12:46.150 "data_offset": 2048, 00:12:46.150 "data_size": 63488 00:12:46.150 }, 00:12:46.150 { 00:12:46.150 "name": "BaseBdev3", 00:12:46.150 "uuid": "7e6ada0c-52fe-46b5-9687-b1e7d3152664", 
00:12:46.150 "is_configured": true, 00:12:46.150 "data_offset": 2048, 00:12:46.150 "data_size": 63488 00:12:46.150 } 00:12:46.150 ] 00:12:46.150 }' 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.150 19:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 [2024-12-05 19:32:40.068685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.717 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.976 [2024-12-05 19:32:40.208284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:46.976 [2024-12-05 19:32:40.208346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:46.976 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.977 BaseBdev2 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.977 19:32:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.977 [ 00:12:46.977 { 00:12:46.977 "name": "BaseBdev2", 00:12:46.977 "aliases": [ 00:12:46.977 "7c8cb78f-e2e9-459e-8f77-3c30d1321d92" 00:12:46.977 ], 00:12:46.977 "product_name": "Malloc disk", 00:12:46.977 "block_size": 512, 00:12:46.977 "num_blocks": 65536, 00:12:46.977 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:46.977 "assigned_rate_limits": { 00:12:46.977 "rw_ios_per_sec": 0, 00:12:46.977 "rw_mbytes_per_sec": 0, 00:12:46.977 "r_mbytes_per_sec": 0, 00:12:46.977 "w_mbytes_per_sec": 0 00:12:46.977 }, 00:12:46.977 "claimed": false, 00:12:46.977 "zoned": false, 00:12:46.977 "supported_io_types": { 00:12:46.977 "read": true, 00:12:46.977 "write": true, 00:12:46.977 "unmap": true, 00:12:46.977 "flush": true, 00:12:46.977 "reset": true, 00:12:46.977 "nvme_admin": false, 00:12:46.977 "nvme_io": false, 00:12:46.977 "nvme_io_md": false, 00:12:46.977 "write_zeroes": true, 00:12:46.977 "zcopy": true, 00:12:46.977 "get_zone_info": false, 00:12:46.977 
"zone_management": false, 00:12:46.977 "zone_append": false, 00:12:46.977 "compare": false, 00:12:46.977 "compare_and_write": false, 00:12:46.977 "abort": true, 00:12:46.977 "seek_hole": false, 00:12:46.977 "seek_data": false, 00:12:46.977 "copy": true, 00:12:46.977 "nvme_iov_md": false 00:12:46.977 }, 00:12:46.977 "memory_domains": [ 00:12:46.977 { 00:12:46.977 "dma_device_id": "system", 00:12:46.977 "dma_device_type": 1 00:12:46.977 }, 00:12:46.977 { 00:12:46.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.977 "dma_device_type": 2 00:12:46.977 } 00:12:46.977 ], 00:12:46.977 "driver_specific": {} 00:12:46.977 } 00:12:46.977 ] 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.977 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 BaseBdev3 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 [ 00:12:47.236 { 00:12:47.236 "name": "BaseBdev3", 00:12:47.236 "aliases": [ 00:12:47.236 "3c888413-fb7f-4d3b-9f48-129b94769285" 00:12:47.236 ], 00:12:47.236 "product_name": "Malloc disk", 00:12:47.236 "block_size": 512, 00:12:47.236 "num_blocks": 65536, 00:12:47.236 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:47.236 "assigned_rate_limits": { 00:12:47.236 "rw_ios_per_sec": 0, 00:12:47.236 "rw_mbytes_per_sec": 0, 00:12:47.236 "r_mbytes_per_sec": 0, 00:12:47.236 "w_mbytes_per_sec": 0 00:12:47.236 }, 00:12:47.236 "claimed": false, 00:12:47.236 "zoned": false, 00:12:47.236 "supported_io_types": { 00:12:47.236 "read": true, 00:12:47.236 "write": true, 00:12:47.236 "unmap": true, 00:12:47.236 "flush": true, 00:12:47.236 "reset": true, 00:12:47.236 "nvme_admin": false, 00:12:47.236 "nvme_io": false, 00:12:47.236 "nvme_io_md": false, 00:12:47.236 "write_zeroes": true, 00:12:47.236 
"zcopy": true, 00:12:47.236 "get_zone_info": false, 00:12:47.236 "zone_management": false, 00:12:47.236 "zone_append": false, 00:12:47.236 "compare": false, 00:12:47.236 "compare_and_write": false, 00:12:47.236 "abort": true, 00:12:47.236 "seek_hole": false, 00:12:47.236 "seek_data": false, 00:12:47.236 "copy": true, 00:12:47.236 "nvme_iov_md": false 00:12:47.236 }, 00:12:47.236 "memory_domains": [ 00:12:47.236 { 00:12:47.236 "dma_device_id": "system", 00:12:47.236 "dma_device_type": 1 00:12:47.236 }, 00:12:47.236 { 00:12:47.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.236 "dma_device_type": 2 00:12:47.236 } 00:12:47.236 ], 00:12:47.236 "driver_specific": {} 00:12:47.236 } 00:12:47.236 ] 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 [2024-12-05 19:32:40.497801] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.236 [2024-12-05 19:32:40.497855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.236 [2024-12-05 19:32:40.497903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.236 [2024-12-05 19:32:40.500341] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.236 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.237 19:32:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.237 "name": "Existed_Raid", 00:12:47.237 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:47.237 "strip_size_kb": 64, 00:12:47.237 "state": "configuring", 00:12:47.237 "raid_level": "concat", 00:12:47.237 "superblock": true, 00:12:47.237 "num_base_bdevs": 3, 00:12:47.237 "num_base_bdevs_discovered": 2, 00:12:47.237 "num_base_bdevs_operational": 3, 00:12:47.237 "base_bdevs_list": [ 00:12:47.237 { 00:12:47.237 "name": "BaseBdev1", 00:12:47.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.237 "is_configured": false, 00:12:47.237 "data_offset": 0, 00:12:47.237 "data_size": 0 00:12:47.237 }, 00:12:47.237 { 00:12:47.237 "name": "BaseBdev2", 00:12:47.237 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:47.237 "is_configured": true, 00:12:47.237 "data_offset": 2048, 00:12:47.237 "data_size": 63488 00:12:47.237 }, 00:12:47.237 { 00:12:47.237 "name": "BaseBdev3", 00:12:47.237 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:47.237 "is_configured": true, 00:12:47.237 "data_offset": 2048, 00:12:47.237 "data_size": 63488 00:12:47.237 } 00:12:47.237 ] 00:12:47.237 }' 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.237 19:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.804 [2024-12-05 19:32:41.018006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.804 19:32:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.804 "name": "Existed_Raid", 00:12:47.804 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:47.804 "strip_size_kb": 64, 
00:12:47.804 "state": "configuring", 00:12:47.804 "raid_level": "concat", 00:12:47.804 "superblock": true, 00:12:47.804 "num_base_bdevs": 3, 00:12:47.804 "num_base_bdevs_discovered": 1, 00:12:47.804 "num_base_bdevs_operational": 3, 00:12:47.804 "base_bdevs_list": [ 00:12:47.804 { 00:12:47.804 "name": "BaseBdev1", 00:12:47.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.804 "is_configured": false, 00:12:47.804 "data_offset": 0, 00:12:47.804 "data_size": 0 00:12:47.804 }, 00:12:47.804 { 00:12:47.804 "name": null, 00:12:47.804 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:47.804 "is_configured": false, 00:12:47.804 "data_offset": 0, 00:12:47.804 "data_size": 63488 00:12:47.804 }, 00:12:47.804 { 00:12:47.804 "name": "BaseBdev3", 00:12:47.804 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:47.804 "is_configured": true, 00:12:47.804 "data_offset": 2048, 00:12:47.804 "data_size": 63488 00:12:47.804 } 00:12:47.804 ] 00:12:47.804 }' 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.804 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.371 [2024-12-05 19:32:41.641466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.371 BaseBdev1 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.371 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.371 
[ 00:12:48.371 { 00:12:48.371 "name": "BaseBdev1", 00:12:48.371 "aliases": [ 00:12:48.371 "1f3ab517-41d4-473a-9d11-676b4cda088c" 00:12:48.371 ], 00:12:48.371 "product_name": "Malloc disk", 00:12:48.371 "block_size": 512, 00:12:48.372 "num_blocks": 65536, 00:12:48.372 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:48.372 "assigned_rate_limits": { 00:12:48.372 "rw_ios_per_sec": 0, 00:12:48.372 "rw_mbytes_per_sec": 0, 00:12:48.372 "r_mbytes_per_sec": 0, 00:12:48.372 "w_mbytes_per_sec": 0 00:12:48.372 }, 00:12:48.372 "claimed": true, 00:12:48.372 "claim_type": "exclusive_write", 00:12:48.372 "zoned": false, 00:12:48.372 "supported_io_types": { 00:12:48.372 "read": true, 00:12:48.372 "write": true, 00:12:48.372 "unmap": true, 00:12:48.372 "flush": true, 00:12:48.372 "reset": true, 00:12:48.372 "nvme_admin": false, 00:12:48.372 "nvme_io": false, 00:12:48.372 "nvme_io_md": false, 00:12:48.372 "write_zeroes": true, 00:12:48.372 "zcopy": true, 00:12:48.372 "get_zone_info": false, 00:12:48.372 "zone_management": false, 00:12:48.372 "zone_append": false, 00:12:48.372 "compare": false, 00:12:48.372 "compare_and_write": false, 00:12:48.372 "abort": true, 00:12:48.372 "seek_hole": false, 00:12:48.372 "seek_data": false, 00:12:48.372 "copy": true, 00:12:48.372 "nvme_iov_md": false 00:12:48.372 }, 00:12:48.372 "memory_domains": [ 00:12:48.372 { 00:12:48.372 "dma_device_id": "system", 00:12:48.372 "dma_device_type": 1 00:12:48.372 }, 00:12:48.372 { 00:12:48.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.372 "dma_device_type": 2 00:12:48.372 } 00:12:48.372 ], 00:12:48.372 "driver_specific": {} 00:12:48.372 } 00:12:48.372 ] 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.372 "name": "Existed_Raid", 00:12:48.372 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:48.372 "strip_size_kb": 64, 00:12:48.372 "state": "configuring", 00:12:48.372 "raid_level": "concat", 00:12:48.372 "superblock": true, 
00:12:48.372 "num_base_bdevs": 3, 00:12:48.372 "num_base_bdevs_discovered": 2, 00:12:48.372 "num_base_bdevs_operational": 3, 00:12:48.372 "base_bdevs_list": [ 00:12:48.372 { 00:12:48.372 "name": "BaseBdev1", 00:12:48.372 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:48.372 "is_configured": true, 00:12:48.372 "data_offset": 2048, 00:12:48.372 "data_size": 63488 00:12:48.372 }, 00:12:48.372 { 00:12:48.372 "name": null, 00:12:48.372 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:48.372 "is_configured": false, 00:12:48.372 "data_offset": 0, 00:12:48.372 "data_size": 63488 00:12:48.372 }, 00:12:48.372 { 00:12:48.372 "name": "BaseBdev3", 00:12:48.372 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:48.372 "is_configured": true, 00:12:48.372 "data_offset": 2048, 00:12:48.372 "data_size": 63488 00:12:48.372 } 00:12:48.372 ] 00:12:48.372 }' 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.372 19:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.940 [2024-12-05 19:32:42.249730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.940 "name": "Existed_Raid", 00:12:48.940 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:48.940 "strip_size_kb": 64, 00:12:48.940 "state": "configuring", 00:12:48.940 "raid_level": "concat", 00:12:48.940 "superblock": true, 00:12:48.940 "num_base_bdevs": 3, 00:12:48.940 "num_base_bdevs_discovered": 1, 00:12:48.940 "num_base_bdevs_operational": 3, 00:12:48.940 "base_bdevs_list": [ 00:12:48.940 { 00:12:48.940 "name": "BaseBdev1", 00:12:48.940 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:48.940 "is_configured": true, 00:12:48.940 "data_offset": 2048, 00:12:48.940 "data_size": 63488 00:12:48.940 }, 00:12:48.940 { 00:12:48.940 "name": null, 00:12:48.940 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:48.940 "is_configured": false, 00:12:48.940 "data_offset": 0, 00:12:48.940 "data_size": 63488 00:12:48.940 }, 00:12:48.940 { 00:12:48.940 "name": null, 00:12:48.940 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:48.940 "is_configured": false, 00:12:48.940 "data_offset": 0, 00:12:48.940 "data_size": 63488 00:12:48.940 } 00:12:48.940 ] 00:12:48.940 }' 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.940 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 [2024-12-05 19:32:42.857967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.509 "name": "Existed_Raid", 00:12:49.509 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:49.509 "strip_size_kb": 64, 00:12:49.509 "state": "configuring", 00:12:49.509 "raid_level": "concat", 00:12:49.509 "superblock": true, 00:12:49.509 "num_base_bdevs": 3, 00:12:49.509 "num_base_bdevs_discovered": 2, 00:12:49.509 "num_base_bdevs_operational": 3, 00:12:49.509 "base_bdevs_list": [ 00:12:49.509 { 00:12:49.509 "name": "BaseBdev1", 00:12:49.509 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:49.509 "is_configured": true, 00:12:49.509 "data_offset": 2048, 00:12:49.509 "data_size": 63488 00:12:49.509 }, 00:12:49.509 { 00:12:49.509 "name": null, 00:12:49.509 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:49.509 "is_configured": false, 00:12:49.509 "data_offset": 0, 00:12:49.509 "data_size": 63488 00:12:49.509 }, 00:12:49.509 { 00:12:49.509 "name": "BaseBdev3", 00:12:49.509 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:49.509 "is_configured": true, 00:12:49.509 "data_offset": 2048, 00:12:49.509 "data_size": 63488 00:12:49.509 } 00:12:49.509 ] 00:12:49.509 }' 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.509 19:32:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.076 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.076 [2024-12-05 19:32:43.442190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.335 "name": "Existed_Raid", 00:12:50.335 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:50.335 "strip_size_kb": 64, 00:12:50.335 "state": "configuring", 00:12:50.335 "raid_level": "concat", 00:12:50.335 "superblock": true, 00:12:50.335 "num_base_bdevs": 3, 00:12:50.335 "num_base_bdevs_discovered": 1, 00:12:50.335 "num_base_bdevs_operational": 3, 00:12:50.335 "base_bdevs_list": [ 00:12:50.335 { 00:12:50.335 "name": null, 00:12:50.335 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:50.335 "is_configured": false, 00:12:50.335 "data_offset": 0, 00:12:50.335 "data_size": 63488 00:12:50.335 }, 00:12:50.335 { 00:12:50.335 "name": null, 00:12:50.335 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:50.335 "is_configured": false, 00:12:50.335 "data_offset": 0, 
00:12:50.335 "data_size": 63488 00:12:50.335 }, 00:12:50.335 { 00:12:50.335 "name": "BaseBdev3", 00:12:50.335 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:50.335 "is_configured": true, 00:12:50.335 "data_offset": 2048, 00:12:50.335 "data_size": 63488 00:12:50.335 } 00:12:50.335 ] 00:12:50.335 }' 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.335 19:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 [2024-12-05 19:32:44.122253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:50.902 19:32:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.902 "name": "Existed_Raid", 00:12:50.902 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:50.902 "strip_size_kb": 64, 00:12:50.902 "state": "configuring", 00:12:50.902 "raid_level": "concat", 00:12:50.902 "superblock": true, 00:12:50.902 "num_base_bdevs": 3, 00:12:50.902 
"num_base_bdevs_discovered": 2, 00:12:50.902 "num_base_bdevs_operational": 3, 00:12:50.902 "base_bdevs_list": [ 00:12:50.902 { 00:12:50.902 "name": null, 00:12:50.902 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:50.902 "is_configured": false, 00:12:50.902 "data_offset": 0, 00:12:50.902 "data_size": 63488 00:12:50.902 }, 00:12:50.902 { 00:12:50.902 "name": "BaseBdev2", 00:12:50.902 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:50.902 "is_configured": true, 00:12:50.902 "data_offset": 2048, 00:12:50.902 "data_size": 63488 00:12:50.902 }, 00:12:50.902 { 00:12:50.902 "name": "BaseBdev3", 00:12:50.902 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:50.902 "is_configured": true, 00:12:50.902 "data_offset": 2048, 00:12:50.902 "data_size": 63488 00:12:50.902 } 00:12:50.902 ] 00:12:50.902 }' 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.902 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.469 19:32:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1f3ab517-41d4-473a-9d11-676b4cda088c 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.469 [2024-12-05 19:32:44.792537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:51.469 [2024-12-05 19:32:44.792853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:51.469 [2024-12-05 19:32:44.792879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:51.469 NewBaseBdev 00:12:51.469 [2024-12-05 19:32:44.793179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:51.469 [2024-12-05 19:32:44.793374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.469 [2024-12-05 19:32:44.793391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:51.469 [2024-12-05 19:32:44.793558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:51.469 
19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.469 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.469 [ 00:12:51.469 { 00:12:51.469 "name": "NewBaseBdev", 00:12:51.469 "aliases": [ 00:12:51.469 "1f3ab517-41d4-473a-9d11-676b4cda088c" 00:12:51.469 ], 00:12:51.469 "product_name": "Malloc disk", 00:12:51.469 "block_size": 512, 00:12:51.469 "num_blocks": 65536, 00:12:51.469 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:51.469 "assigned_rate_limits": { 00:12:51.469 "rw_ios_per_sec": 0, 00:12:51.469 "rw_mbytes_per_sec": 0, 00:12:51.469 "r_mbytes_per_sec": 0, 00:12:51.469 "w_mbytes_per_sec": 0 00:12:51.469 }, 00:12:51.469 "claimed": true, 00:12:51.469 "claim_type": "exclusive_write", 00:12:51.469 "zoned": false, 00:12:51.469 "supported_io_types": { 00:12:51.469 "read": true, 00:12:51.469 "write": true, 00:12:51.469 
"unmap": true, 00:12:51.469 "flush": true, 00:12:51.469 "reset": true, 00:12:51.469 "nvme_admin": false, 00:12:51.469 "nvme_io": false, 00:12:51.469 "nvme_io_md": false, 00:12:51.470 "write_zeroes": true, 00:12:51.470 "zcopy": true, 00:12:51.470 "get_zone_info": false, 00:12:51.470 "zone_management": false, 00:12:51.470 "zone_append": false, 00:12:51.470 "compare": false, 00:12:51.470 "compare_and_write": false, 00:12:51.470 "abort": true, 00:12:51.470 "seek_hole": false, 00:12:51.470 "seek_data": false, 00:12:51.470 "copy": true, 00:12:51.470 "nvme_iov_md": false 00:12:51.470 }, 00:12:51.470 "memory_domains": [ 00:12:51.470 { 00:12:51.470 "dma_device_id": "system", 00:12:51.470 "dma_device_type": 1 00:12:51.470 }, 00:12:51.470 { 00:12:51.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.470 "dma_device_type": 2 00:12:51.470 } 00:12:51.470 ], 00:12:51.470 "driver_specific": {} 00:12:51.470 } 00:12:51.470 ] 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.470 "name": "Existed_Raid", 00:12:51.470 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:51.470 "strip_size_kb": 64, 00:12:51.470 "state": "online", 00:12:51.470 "raid_level": "concat", 00:12:51.470 "superblock": true, 00:12:51.470 "num_base_bdevs": 3, 00:12:51.470 "num_base_bdevs_discovered": 3, 00:12:51.470 "num_base_bdevs_operational": 3, 00:12:51.470 "base_bdevs_list": [ 00:12:51.470 { 00:12:51.470 "name": "NewBaseBdev", 00:12:51.470 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:51.470 "is_configured": true, 00:12:51.470 "data_offset": 2048, 00:12:51.470 "data_size": 63488 00:12:51.470 }, 00:12:51.470 { 00:12:51.470 "name": "BaseBdev2", 00:12:51.470 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:51.470 "is_configured": true, 00:12:51.470 "data_offset": 2048, 00:12:51.470 "data_size": 63488 00:12:51.470 }, 00:12:51.470 { 00:12:51.470 "name": "BaseBdev3", 00:12:51.470 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 
00:12:51.470 "is_configured": true, 00:12:51.470 "data_offset": 2048, 00:12:51.470 "data_size": 63488 00:12:51.470 } 00:12:51.470 ] 00:12:51.470 }' 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.470 19:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:52.035 [2024-12-05 19:32:45.337154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:52.035 "name": "Existed_Raid", 00:12:52.035 "aliases": [ 00:12:52.035 "836d4e58-f5b5-4853-a074-d6751f40d243" 00:12:52.035 ], 00:12:52.035 
"product_name": "Raid Volume", 00:12:52.035 "block_size": 512, 00:12:52.035 "num_blocks": 190464, 00:12:52.035 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:52.035 "assigned_rate_limits": { 00:12:52.035 "rw_ios_per_sec": 0, 00:12:52.035 "rw_mbytes_per_sec": 0, 00:12:52.035 "r_mbytes_per_sec": 0, 00:12:52.035 "w_mbytes_per_sec": 0 00:12:52.035 }, 00:12:52.035 "claimed": false, 00:12:52.035 "zoned": false, 00:12:52.035 "supported_io_types": { 00:12:52.035 "read": true, 00:12:52.035 "write": true, 00:12:52.035 "unmap": true, 00:12:52.035 "flush": true, 00:12:52.035 "reset": true, 00:12:52.035 "nvme_admin": false, 00:12:52.035 "nvme_io": false, 00:12:52.035 "nvme_io_md": false, 00:12:52.035 "write_zeroes": true, 00:12:52.035 "zcopy": false, 00:12:52.035 "get_zone_info": false, 00:12:52.035 "zone_management": false, 00:12:52.035 "zone_append": false, 00:12:52.035 "compare": false, 00:12:52.035 "compare_and_write": false, 00:12:52.035 "abort": false, 00:12:52.035 "seek_hole": false, 00:12:52.035 "seek_data": false, 00:12:52.035 "copy": false, 00:12:52.035 "nvme_iov_md": false 00:12:52.035 }, 00:12:52.035 "memory_domains": [ 00:12:52.035 { 00:12:52.035 "dma_device_id": "system", 00:12:52.035 "dma_device_type": 1 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.035 "dma_device_type": 2 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "dma_device_id": "system", 00:12:52.035 "dma_device_type": 1 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.035 "dma_device_type": 2 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "dma_device_id": "system", 00:12:52.035 "dma_device_type": 1 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.035 "dma_device_type": 2 00:12:52.035 } 00:12:52.035 ], 00:12:52.035 "driver_specific": { 00:12:52.035 "raid": { 00:12:52.035 "uuid": "836d4e58-f5b5-4853-a074-d6751f40d243", 00:12:52.035 "strip_size_kb": 64, 00:12:52.035 
"state": "online", 00:12:52.035 "raid_level": "concat", 00:12:52.035 "superblock": true, 00:12:52.035 "num_base_bdevs": 3, 00:12:52.035 "num_base_bdevs_discovered": 3, 00:12:52.035 "num_base_bdevs_operational": 3, 00:12:52.035 "base_bdevs_list": [ 00:12:52.035 { 00:12:52.035 "name": "NewBaseBdev", 00:12:52.035 "uuid": "1f3ab517-41d4-473a-9d11-676b4cda088c", 00:12:52.035 "is_configured": true, 00:12:52.035 "data_offset": 2048, 00:12:52.035 "data_size": 63488 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "name": "BaseBdev2", 00:12:52.035 "uuid": "7c8cb78f-e2e9-459e-8f77-3c30d1321d92", 00:12:52.035 "is_configured": true, 00:12:52.035 "data_offset": 2048, 00:12:52.035 "data_size": 63488 00:12:52.035 }, 00:12:52.035 { 00:12:52.035 "name": "BaseBdev3", 00:12:52.035 "uuid": "3c888413-fb7f-4d3b-9f48-129b94769285", 00:12:52.035 "is_configured": true, 00:12:52.035 "data_offset": 2048, 00:12:52.035 "data_size": 63488 00:12:52.035 } 00:12:52.035 ] 00:12:52.035 } 00:12:52.035 } 00:12:52.035 }' 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:52.035 BaseBdev2 00:12:52.035 BaseBdev3' 00:12:52.035 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.292 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.293 19:32:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 [2024-12-05 19:32:45.652890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.293 [2024-12-05 19:32:45.652925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.293 [2024-12-05 19:32:45.653017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.293 [2024-12-05 19:32:45.653092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.293 [2024-12-05 19:32:45.653113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66256 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66256 ']' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
66256 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66256 00:12:52.293 killing process with pid 66256 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66256' 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66256 00:12:52.293 [2024-12-05 19:32:45.690962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.293 19:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66256 00:12:52.551 [2024-12-05 19:32:45.954841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.925 19:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:53.925 00:12:53.925 real 0m11.823s 00:12:53.925 user 0m19.694s 00:12:53.925 sys 0m1.543s 00:12:53.925 19:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.925 19:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.925 ************************************ 00:12:53.925 END TEST raid_state_function_test_sb 00:12:53.925 ************************************ 00:12:53.925 19:32:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:53.925 19:32:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:53.925 
19:32:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.925 19:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.925 ************************************ 00:12:53.925 START TEST raid_superblock_test 00:12:53.925 ************************************ 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:53.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66883 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66883 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66883 ']' 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.925 19:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.925 [2024-12-05 19:32:47.182149] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:53.925 [2024-12-05 19:32:47.182324] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66883 ] 00:12:53.925 [2024-12-05 19:32:47.363013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.251 [2024-12-05 19:32:47.493538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.540 [2024-12-05 19:32:47.695457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.540 [2024-12-05 19:32:47.695537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:54.799 
19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.799 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 malloc1 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 [2024-12-05 19:32:48.273357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.059 [2024-12-05 19:32:48.273575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.059 [2024-12-05 19:32:48.273658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:55.059 [2024-12-05 19:32:48.273859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.059 [2024-12-05 19:32:48.276791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.059 [2024-12-05 19:32:48.276967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.059 pt1 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 malloc2 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 [2024-12-05 19:32:48.325783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.059 [2024-12-05 19:32:48.325855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.059 [2024-12-05 19:32:48.325896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:55.059 [2024-12-05 19:32:48.325912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.059 [2024-12-05 19:32:48.328675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.059 [2024-12-05 19:32:48.328751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.059 
pt2 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 malloc3 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 [2024-12-05 19:32:48.389559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:55.059 [2024-12-05 19:32:48.389630] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.059 [2024-12-05 19:32:48.389665] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:55.059 [2024-12-05 19:32:48.389681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.059 [2024-12-05 19:32:48.392549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.059 [2024-12-05 19:32:48.392749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:55.059 pt3 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.059 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.059 [2024-12-05 19:32:48.397702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.059 [2024-12-05 19:32:48.400176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.059 [2024-12-05 19:32:48.400417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:55.059 [2024-12-05 19:32:48.400645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:55.060 [2024-12-05 19:32:48.400669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:55.060 [2024-12-05 19:32:48.401013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:55.060 [2024-12-05 19:32:48.401224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:55.060 [2024-12-05 19:32:48.401239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:55.060 [2024-12-05 19:32:48.401424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.060 19:32:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.060 "name": "raid_bdev1", 00:12:55.060 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:55.060 "strip_size_kb": 64, 00:12:55.060 "state": "online", 00:12:55.060 "raid_level": "concat", 00:12:55.060 "superblock": true, 00:12:55.060 "num_base_bdevs": 3, 00:12:55.060 "num_base_bdevs_discovered": 3, 00:12:55.060 "num_base_bdevs_operational": 3, 00:12:55.060 "base_bdevs_list": [ 00:12:55.060 { 00:12:55.060 "name": "pt1", 00:12:55.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.060 "is_configured": true, 00:12:55.060 "data_offset": 2048, 00:12:55.060 "data_size": 63488 00:12:55.060 }, 00:12:55.060 { 00:12:55.060 "name": "pt2", 00:12:55.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.060 "is_configured": true, 00:12:55.060 "data_offset": 2048, 00:12:55.060 "data_size": 63488 00:12:55.060 }, 00:12:55.060 { 00:12:55.060 "name": "pt3", 00:12:55.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.060 "is_configured": true, 00:12:55.060 "data_offset": 2048, 00:12:55.060 "data_size": 63488 00:12:55.060 } 00:12:55.060 ] 00:12:55.060 }' 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.060 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.628 [2024-12-05 19:32:48.902190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.628 "name": "raid_bdev1", 00:12:55.628 "aliases": [ 00:12:55.628 "beec85fc-73c5-447f-9a6a-d24db92a2618" 00:12:55.628 ], 00:12:55.628 "product_name": "Raid Volume", 00:12:55.628 "block_size": 512, 00:12:55.628 "num_blocks": 190464, 00:12:55.628 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:55.628 "assigned_rate_limits": { 00:12:55.628 "rw_ios_per_sec": 0, 00:12:55.628 "rw_mbytes_per_sec": 0, 00:12:55.628 "r_mbytes_per_sec": 0, 00:12:55.628 "w_mbytes_per_sec": 0 00:12:55.628 }, 00:12:55.628 "claimed": false, 00:12:55.628 "zoned": false, 00:12:55.628 "supported_io_types": { 00:12:55.628 "read": true, 00:12:55.628 "write": true, 00:12:55.628 "unmap": true, 00:12:55.628 "flush": true, 00:12:55.628 "reset": true, 00:12:55.628 "nvme_admin": false, 00:12:55.628 "nvme_io": false, 00:12:55.628 "nvme_io_md": false, 00:12:55.628 "write_zeroes": true, 00:12:55.628 "zcopy": false, 00:12:55.628 "get_zone_info": false, 00:12:55.628 "zone_management": false, 00:12:55.628 "zone_append": false, 00:12:55.628 "compare": 
false, 00:12:55.628 "compare_and_write": false, 00:12:55.628 "abort": false, 00:12:55.628 "seek_hole": false, 00:12:55.628 "seek_data": false, 00:12:55.628 "copy": false, 00:12:55.628 "nvme_iov_md": false 00:12:55.628 }, 00:12:55.628 "memory_domains": [ 00:12:55.628 { 00:12:55.628 "dma_device_id": "system", 00:12:55.628 "dma_device_type": 1 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.628 "dma_device_type": 2 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "dma_device_id": "system", 00:12:55.628 "dma_device_type": 1 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.628 "dma_device_type": 2 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "dma_device_id": "system", 00:12:55.628 "dma_device_type": 1 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.628 "dma_device_type": 2 00:12:55.628 } 00:12:55.628 ], 00:12:55.628 "driver_specific": { 00:12:55.628 "raid": { 00:12:55.628 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:55.628 "strip_size_kb": 64, 00:12:55.628 "state": "online", 00:12:55.628 "raid_level": "concat", 00:12:55.628 "superblock": true, 00:12:55.628 "num_base_bdevs": 3, 00:12:55.628 "num_base_bdevs_discovered": 3, 00:12:55.628 "num_base_bdevs_operational": 3, 00:12:55.628 "base_bdevs_list": [ 00:12:55.628 { 00:12:55.628 "name": "pt1", 00:12:55.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.628 "is_configured": true, 00:12:55.628 "data_offset": 2048, 00:12:55.628 "data_size": 63488 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "name": "pt2", 00:12:55.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.628 "is_configured": true, 00:12:55.628 "data_offset": 2048, 00:12:55.628 "data_size": 63488 00:12:55.628 }, 00:12:55.628 { 00:12:55.628 "name": "pt3", 00:12:55.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.628 "is_configured": true, 00:12:55.628 "data_offset": 2048, 00:12:55.628 
"data_size": 63488 00:12:55.628 } 00:12:55.628 ] 00:12:55.628 } 00:12:55.628 } 00:12:55.628 }' 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.628 19:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:55.628 pt2 00:12:55.628 pt3' 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.628 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 [2024-12-05 19:32:49.242230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=beec85fc-73c5-447f-9a6a-d24db92a2618 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z beec85fc-73c5-447f-9a6a-d24db92a2618 ']' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 [2024-12-05 19:32:49.305904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.887 [2024-12-05 19:32:49.305940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.887 [2024-12-05 19:32:49.306038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.887 [2024-12-05 19:32:49.306155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.887 [2024-12-05 19:32:49.306172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 [2024-12-05 19:32:49.454021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:56.146 [2024-12-05 19:32:49.456511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:12:56.146 [2024-12-05 19:32:49.456580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:56.146 [2024-12-05 19:32:49.456656] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:56.146 [2024-12-05 19:32:49.456897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:56.146 [2024-12-05 19:32:49.457004] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:56.146 [2024-12-05 19:32:49.457167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.146 [2024-12-05 19:32:49.457273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:56.146 request: 00:12:56.146 { 00:12:56.146 "name": "raid_bdev1", 00:12:56.146 "raid_level": "concat", 00:12:56.146 "base_bdevs": [ 00:12:56.146 "malloc1", 00:12:56.146 "malloc2", 00:12:56.146 "malloc3" 00:12:56.146 ], 00:12:56.146 "strip_size_kb": 64, 00:12:56.146 "superblock": false, 00:12:56.146 "method": "bdev_raid_create", 00:12:56.146 "req_id": 1 00:12:56.146 } 00:12:56.146 Got JSON-RPC error response 00:12:56.146 response: 00:12:56.146 { 00:12:56.146 "code": -17, 00:12:56.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:56.146 } 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 [2024-12-05 19:32:49.521958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.146 [2024-12-05 19:32:49.522031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.146 [2024-12-05 19:32:49.522064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:56.146 [2024-12-05 19:32:49.522080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.146 [2024-12-05 19:32:49.525062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.146 [2024-12-05 19:32:49.525139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.146 [2024-12-05 19:32:49.525244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:56.146 [2024-12-05 19:32:49.525322] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.146 pt1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.146 "name": "raid_bdev1", 
00:12:56.146 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:56.146 "strip_size_kb": 64, 00:12:56.146 "state": "configuring", 00:12:56.146 "raid_level": "concat", 00:12:56.146 "superblock": true, 00:12:56.146 "num_base_bdevs": 3, 00:12:56.146 "num_base_bdevs_discovered": 1, 00:12:56.146 "num_base_bdevs_operational": 3, 00:12:56.146 "base_bdevs_list": [ 00:12:56.146 { 00:12:56.146 "name": "pt1", 00:12:56.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.146 "is_configured": true, 00:12:56.146 "data_offset": 2048, 00:12:56.146 "data_size": 63488 00:12:56.146 }, 00:12:56.146 { 00:12:56.146 "name": null, 00:12:56.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.146 "is_configured": false, 00:12:56.146 "data_offset": 2048, 00:12:56.146 "data_size": 63488 00:12:56.146 }, 00:12:56.146 { 00:12:56.146 "name": null, 00:12:56.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.146 "is_configured": false, 00:12:56.146 "data_offset": 2048, 00:12:56.146 "data_size": 63488 00:12:56.146 } 00:12:56.146 ] 00:12:56.146 }' 00:12:56.146 19:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.147 19:32:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 [2024-12-05 19:32:50.050139] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.713 [2024-12-05 19:32:50.050229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.713 [2024-12-05 19:32:50.050281] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:56.713 [2024-12-05 19:32:50.050296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.713 [2024-12-05 19:32:50.050874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.713 [2024-12-05 19:32:50.050906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.713 [2024-12-05 19:32:50.051018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.713 [2024-12-05 19:32:50.051191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.713 pt2 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 [2024-12-05 19:32:50.058106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.713 "name": "raid_bdev1", 00:12:56.713 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:56.713 "strip_size_kb": 64, 00:12:56.713 "state": "configuring", 00:12:56.713 "raid_level": "concat", 00:12:56.713 "superblock": true, 00:12:56.713 "num_base_bdevs": 3, 00:12:56.713 "num_base_bdevs_discovered": 1, 00:12:56.713 "num_base_bdevs_operational": 3, 00:12:56.713 "base_bdevs_list": [ 00:12:56.713 { 00:12:56.713 "name": "pt1", 00:12:56.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.713 "is_configured": true, 00:12:56.713 "data_offset": 2048, 00:12:56.713 "data_size": 63488 00:12:56.713 }, 00:12:56.713 { 00:12:56.713 "name": null, 00:12:56.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.713 "is_configured": false, 00:12:56.713 "data_offset": 0, 00:12:56.713 "data_size": 63488 00:12:56.713 }, 00:12:56.713 { 00:12:56.713 "name": null, 00:12:56.713 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.713 "is_configured": false, 00:12:56.713 "data_offset": 2048, 00:12:56.713 "data_size": 63488 00:12:56.713 } 00:12:56.713 ] 00:12:56.713 }' 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.713 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.281 [2024-12-05 19:32:50.590239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.281 [2024-12-05 19:32:50.590330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.281 [2024-12-05 19:32:50.590361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:57.281 [2024-12-05 19:32:50.590378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.281 [2024-12-05 19:32:50.590990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.281 [2024-12-05 19:32:50.591022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.281 [2024-12-05 19:32:50.591127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:57.281 [2024-12-05 19:32:50.591165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.281 pt2 00:12:57.281 19:32:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.281 [2024-12-05 19:32:50.598206] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.281 [2024-12-05 19:32:50.598263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.281 [2024-12-05 19:32:50.598286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:57.281 [2024-12-05 19:32:50.598302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.281 [2024-12-05 19:32:50.598780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.281 [2024-12-05 19:32:50.598821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.281 [2024-12-05 19:32:50.598895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:57.281 [2024-12-05 19:32:50.598929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.281 [2024-12-05 19:32:50.599076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.281 [2024-12-05 19:32:50.599097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:57.281 [2024-12-05 19:32:50.599411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:12:57.281 [2024-12-05 19:32:50.599613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.281 [2024-12-05 19:32:50.599629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:57.281 [2024-12-05 19:32:50.599830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.281 pt3 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.281 19:32:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.281 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.282 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.282 "name": "raid_bdev1", 00:12:57.282 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:57.282 "strip_size_kb": 64, 00:12:57.282 "state": "online", 00:12:57.282 "raid_level": "concat", 00:12:57.282 "superblock": true, 00:12:57.282 "num_base_bdevs": 3, 00:12:57.282 "num_base_bdevs_discovered": 3, 00:12:57.282 "num_base_bdevs_operational": 3, 00:12:57.282 "base_bdevs_list": [ 00:12:57.282 { 00:12:57.282 "name": "pt1", 00:12:57.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.282 "is_configured": true, 00:12:57.282 "data_offset": 2048, 00:12:57.282 "data_size": 63488 00:12:57.282 }, 00:12:57.282 { 00:12:57.282 "name": "pt2", 00:12:57.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.282 "is_configured": true, 00:12:57.282 "data_offset": 2048, 00:12:57.282 "data_size": 63488 00:12:57.282 }, 00:12:57.282 { 00:12:57.282 "name": "pt3", 00:12:57.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.282 "is_configured": true, 00:12:57.282 "data_offset": 2048, 00:12:57.282 "data_size": 63488 00:12:57.282 } 00:12:57.282 ] 00:12:57.282 }' 00:12:57.282 19:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.282 19:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.847 [2024-12-05 19:32:51.106804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.847 "name": "raid_bdev1", 00:12:57.847 "aliases": [ 00:12:57.847 "beec85fc-73c5-447f-9a6a-d24db92a2618" 00:12:57.847 ], 00:12:57.847 "product_name": "Raid Volume", 00:12:57.847 "block_size": 512, 00:12:57.847 "num_blocks": 190464, 00:12:57.847 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:57.847 "assigned_rate_limits": { 00:12:57.847 "rw_ios_per_sec": 0, 00:12:57.847 "rw_mbytes_per_sec": 0, 00:12:57.847 "r_mbytes_per_sec": 0, 00:12:57.847 "w_mbytes_per_sec": 0 00:12:57.847 }, 00:12:57.847 "claimed": false, 00:12:57.847 "zoned": false, 00:12:57.847 "supported_io_types": { 00:12:57.847 "read": true, 00:12:57.847 "write": true, 00:12:57.847 "unmap": true, 00:12:57.847 "flush": true, 00:12:57.847 "reset": true, 00:12:57.847 "nvme_admin": false, 00:12:57.847 "nvme_io": false, 
00:12:57.847 "nvme_io_md": false, 00:12:57.847 "write_zeroes": true, 00:12:57.847 "zcopy": false, 00:12:57.847 "get_zone_info": false, 00:12:57.847 "zone_management": false, 00:12:57.847 "zone_append": false, 00:12:57.847 "compare": false, 00:12:57.847 "compare_and_write": false, 00:12:57.847 "abort": false, 00:12:57.847 "seek_hole": false, 00:12:57.847 "seek_data": false, 00:12:57.847 "copy": false, 00:12:57.847 "nvme_iov_md": false 00:12:57.847 }, 00:12:57.847 "memory_domains": [ 00:12:57.847 { 00:12:57.847 "dma_device_id": "system", 00:12:57.847 "dma_device_type": 1 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.847 "dma_device_type": 2 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "dma_device_id": "system", 00:12:57.847 "dma_device_type": 1 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.847 "dma_device_type": 2 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "dma_device_id": "system", 00:12:57.847 "dma_device_type": 1 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.847 "dma_device_type": 2 00:12:57.847 } 00:12:57.847 ], 00:12:57.847 "driver_specific": { 00:12:57.847 "raid": { 00:12:57.847 "uuid": "beec85fc-73c5-447f-9a6a-d24db92a2618", 00:12:57.847 "strip_size_kb": 64, 00:12:57.847 "state": "online", 00:12:57.847 "raid_level": "concat", 00:12:57.847 "superblock": true, 00:12:57.847 "num_base_bdevs": 3, 00:12:57.847 "num_base_bdevs_discovered": 3, 00:12:57.847 "num_base_bdevs_operational": 3, 00:12:57.847 "base_bdevs_list": [ 00:12:57.847 { 00:12:57.847 "name": "pt1", 00:12:57.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.847 "is_configured": true, 00:12:57.847 "data_offset": 2048, 00:12:57.847 "data_size": 63488 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "name": "pt2", 00:12:57.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.847 "is_configured": true, 00:12:57.847 "data_offset": 2048, 00:12:57.847 
"data_size": 63488 00:12:57.847 }, 00:12:57.847 { 00:12:57.847 "name": "pt3", 00:12:57.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.847 "is_configured": true, 00:12:57.847 "data_offset": 2048, 00:12:57.847 "data_size": 63488 00:12:57.847 } 00:12:57.847 ] 00:12:57.847 } 00:12:57.847 } 00:12:57.847 }' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.847 pt2 00:12:57.847 pt3' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.847 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:58.106 [2024-12-05 19:32:51.426826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' beec85fc-73c5-447f-9a6a-d24db92a2618 '!=' beec85fc-73c5-447f-9a6a-d24db92a2618 ']' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66883 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66883 ']' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66883 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66883 00:12:58.106 killing process with pid 66883 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66883' 00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66883 00:12:58.106 [2024-12-05 19:32:51.502903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:12:58.106 19:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66883 00:12:58.106 [2024-12-05 19:32:51.503024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.106 [2024-12-05 19:32:51.503106] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.106 [2024-12-05 19:32:51.503126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:58.365 [2024-12-05 19:32:51.770896] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.739 19:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:59.739 00:12:59.739 real 0m5.748s 00:12:59.739 user 0m8.693s 00:12:59.739 sys 0m0.825s 00:12:59.739 ************************************ 00:12:59.739 END TEST raid_superblock_test 00:12:59.739 ************************************ 00:12:59.739 19:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.739 19:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.739 19:32:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:59.739 19:32:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:59.739 19:32:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.739 19:32:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.739 ************************************ 00:12:59.739 START TEST raid_read_error_test 00:12:59.739 ************************************ 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:59.739 19:32:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZVC9p7t1Ni 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67147 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67147 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67147 ']' 00:12:59.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.739 19:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.739 [2024-12-05 19:32:52.977228] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:12:59.739 [2024-12-05 19:32:52.977396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67147 ] 00:12:59.739 [2024-12-05 19:32:53.151404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.997 [2024-12-05 19:32:53.284971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.255 [2024-12-05 19:32:53.488452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.255 [2024-12-05 19:32:53.488725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.876 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.876 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:00.876 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.876 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.876 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.876 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.876 BaseBdev1_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 true 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 [2024-12-05 19:32:54.058597] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:00.877 [2024-12-05 19:32:54.058669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.877 [2024-12-05 19:32:54.058726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:00.877 [2024-12-05 19:32:54.058748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.877 [2024-12-05 19:32:54.061570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.877 [2024-12-05 19:32:54.061625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.877 BaseBdev1 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 BaseBdev2_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 true 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 [2024-12-05 19:32:54.116123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:00.877 [2024-12-05 19:32:54.116333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.877 [2024-12-05 19:32:54.116370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:00.877 [2024-12-05 19:32:54.116391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.877 [2024-12-05 19:32:54.119323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.877 [2024-12-05 19:32:54.119568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.877 BaseBdev2 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 BaseBdev3_malloc 00:13:00.877 19:32:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 true 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 [2024-12-05 19:32:54.181023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:00.877 [2024-12-05 19:32:54.181125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.877 [2024-12-05 19:32:54.181155] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:00.877 [2024-12-05 19:32:54.181173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.877 [2024-12-05 19:32:54.184234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.877 [2024-12-05 19:32:54.184424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:00.877 BaseBdev3 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 [2024-12-05 19:32:54.189161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.877 [2024-12-05 19:32:54.191627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.877 [2024-12-05 19:32:54.191780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.877 [2024-12-05 19:32:54.192083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:00.877 [2024-12-05 19:32:54.192103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:00.877 [2024-12-05 19:32:54.192409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:00.877 [2024-12-05 19:32:54.192608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:00.877 [2024-12-05 19:32:54.192630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:00.877 [2024-12-05 19:32:54.192816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.877 19:32:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.877 "name": "raid_bdev1", 00:13:00.877 "uuid": "256e14b6-835a-4953-bc21-cdce32e5b78f", 00:13:00.877 "strip_size_kb": 64, 00:13:00.877 "state": "online", 00:13:00.877 "raid_level": "concat", 00:13:00.877 "superblock": true, 00:13:00.877 "num_base_bdevs": 3, 00:13:00.877 "num_base_bdevs_discovered": 3, 00:13:00.877 "num_base_bdevs_operational": 3, 00:13:00.877 "base_bdevs_list": [ 00:13:00.877 { 00:13:00.877 "name": "BaseBdev1", 00:13:00.877 "uuid": "3854e006-bd88-5c0a-ba03-c3386b52f9e2", 00:13:00.877 "is_configured": true, 00:13:00.877 "data_offset": 2048, 00:13:00.877 "data_size": 63488 00:13:00.877 }, 00:13:00.877 { 00:13:00.877 "name": "BaseBdev2", 00:13:00.877 "uuid": "1f01eae3-e926-5361-9299-4fb86d719268", 00:13:00.877 "is_configured": true, 00:13:00.877 "data_offset": 2048, 00:13:00.877 "data_size": 63488 
00:13:00.877 }, 00:13:00.877 { 00:13:00.877 "name": "BaseBdev3", 00:13:00.877 "uuid": "3798c9d6-2646-57f8-8740-4f4ff9dd81fe", 00:13:00.877 "is_configured": true, 00:13:00.877 "data_offset": 2048, 00:13:00.877 "data_size": 63488 00:13:00.877 } 00:13:00.877 ] 00:13:00.877 }' 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.877 19:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.461 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:01.461 19:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.461 [2024-12-05 19:32:54.814797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.396 "name": "raid_bdev1", 00:13:02.396 "uuid": "256e14b6-835a-4953-bc21-cdce32e5b78f", 00:13:02.396 "strip_size_kb": 64, 00:13:02.396 "state": "online", 00:13:02.396 "raid_level": "concat", 00:13:02.396 "superblock": true, 00:13:02.396 "num_base_bdevs": 3, 00:13:02.396 "num_base_bdevs_discovered": 3, 00:13:02.396 "num_base_bdevs_operational": 3, 00:13:02.396 "base_bdevs_list": [ 00:13:02.396 { 00:13:02.396 "name": "BaseBdev1", 00:13:02.396 "uuid": "3854e006-bd88-5c0a-ba03-c3386b52f9e2", 00:13:02.396 "is_configured": true, 00:13:02.396 "data_offset": 2048, 00:13:02.396 "data_size": 63488 
00:13:02.396 }, 00:13:02.396 { 00:13:02.396 "name": "BaseBdev2", 00:13:02.396 "uuid": "1f01eae3-e926-5361-9299-4fb86d719268", 00:13:02.396 "is_configured": true, 00:13:02.396 "data_offset": 2048, 00:13:02.396 "data_size": 63488 00:13:02.396 }, 00:13:02.396 { 00:13:02.396 "name": "BaseBdev3", 00:13:02.396 "uuid": "3798c9d6-2646-57f8-8740-4f4ff9dd81fe", 00:13:02.396 "is_configured": true, 00:13:02.396 "data_offset": 2048, 00:13:02.396 "data_size": 63488 00:13:02.396 } 00:13:02.396 ] 00:13:02.396 }' 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.396 19:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.964 [2024-12-05 19:32:56.254760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.964 [2024-12-05 19:32:56.254941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.964 [2024-12-05 19:32:56.258556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.964 [2024-12-05 19:32:56.258611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.964 [2024-12-05 19:32:56.258661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.964 [2024-12-05 19:32:56.258678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:02.964 { 00:13:02.964 "results": [ 00:13:02.964 { 00:13:02.964 "job": "raid_bdev1", 00:13:02.964 "core_mask": "0x1", 00:13:02.964 "workload": "randrw", 00:13:02.964 "percentage": 50, 
00:13:02.964 "status": "finished", 00:13:02.964 "queue_depth": 1, 00:13:02.964 "io_size": 131072, 00:13:02.964 "runtime": 1.437895, 00:13:02.964 "iops": 10587.699379996453, 00:13:02.964 "mibps": 1323.4624224995566, 00:13:02.964 "io_failed": 1, 00:13:02.964 "io_timeout": 0, 00:13:02.964 "avg_latency_us": 131.36977626511418, 00:13:02.964 "min_latency_us": 38.167272727272724, 00:13:02.964 "max_latency_us": 1921.3963636363637 00:13:02.964 } 00:13:02.964 ], 00:13:02.964 "core_count": 1 00:13:02.964 } 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67147 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67147 ']' 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67147 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67147 00:13:02.964 killing process with pid 67147 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67147' 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67147 00:13:02.964 [2024-12-05 19:32:56.294904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.964 19:32:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67147 00:13:03.223 [2024-12-05 
19:32:56.501882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZVC9p7t1Ni 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:04.601 00:13:04.601 real 0m4.735s 00:13:04.601 user 0m5.904s 00:13:04.601 sys 0m0.569s 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.601 19:32:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.601 ************************************ 00:13:04.601 END TEST raid_read_error_test 00:13:04.601 ************************************ 00:13:04.601 19:32:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:04.601 19:32:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:04.601 19:32:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.601 19:32:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.601 ************************************ 00:13:04.601 START TEST raid_write_error_test 00:13:04.601 ************************************ 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:13:04.601 19:32:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:04.601 19:32:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:04.601 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kQJB2ylWoI 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67293 00:13:04.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67293 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67293 ']' 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.602 19:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.602 [2024-12-05 19:32:57.783477] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:13:04.602 [2024-12-05 19:32:57.783675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67293 ] 00:13:04.602 [2024-12-05 19:32:57.970406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.861 [2024-12-05 19:32:58.103996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.119 [2024-12-05 19:32:58.309335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.119 [2024-12-05 19:32:58.309413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.379 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.379 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:05.379 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.379 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:05.379 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.379 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 BaseBdev1_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 true 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 [2024-12-05 19:32:58.831309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:05.639 [2024-12-05 19:32:58.831382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.639 [2024-12-05 19:32:58.831412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:05.639 [2024-12-05 19:32:58.831441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.639 [2024-12-05 19:32:58.834268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.639 [2024-12-05 19:32:58.834322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:05.639 BaseBdev1 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.639 BaseBdev2_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 true 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 [2024-12-05 19:32:58.892907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.639 [2024-12-05 19:32:58.892992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.639 [2024-12-05 19:32:58.893033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:05.639 [2024-12-05 19:32:58.893050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.639 [2024-12-05 19:32:58.896095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.639 [2024-12-05 19:32:58.896305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.639 BaseBdev2 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.639 19:32:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 BaseBdev3_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 true 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.639 [2024-12-05 19:32:58.969346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:05.639 [2024-12-05 19:32:58.969430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.639 [2024-12-05 19:32:58.969459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:05.639 [2024-12-05 19:32:58.969492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.639 [2024-12-05 19:32:58.972342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.639 [2024-12-05 19:32:58.972519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:05.639 BaseBdev3 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.639 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.640 [2024-12-05 19:32:58.977500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.640 [2024-12-05 19:32:58.980150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.640 [2024-12-05 19:32:58.980378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.640 [2024-12-05 19:32:58.980726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:05.640 [2024-12-05 19:32:58.980857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:05.640 [2024-12-05 19:32:58.981221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:05.640 [2024-12-05 19:32:58.981556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:05.640 [2024-12-05 19:32:58.981694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:05.640 [2024-12-05 19:32:58.982084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.640 19:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.640 19:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.640 19:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.640 "name": "raid_bdev1", 00:13:05.640 "uuid": "8ac028b6-d048-45d4-8ddf-083e2d81a6cc", 00:13:05.640 "strip_size_kb": 64, 00:13:05.640 "state": "online", 00:13:05.640 "raid_level": "concat", 00:13:05.640 "superblock": true, 00:13:05.640 "num_base_bdevs": 3, 00:13:05.640 "num_base_bdevs_discovered": 3, 00:13:05.640 "num_base_bdevs_operational": 3, 00:13:05.640 "base_bdevs_list": [ 00:13:05.640 { 00:13:05.640 
"name": "BaseBdev1", 00:13:05.640 "uuid": "cfe24ac4-871a-507c-a8c2-a84740333f96", 00:13:05.640 "is_configured": true, 00:13:05.640 "data_offset": 2048, 00:13:05.640 "data_size": 63488 00:13:05.640 }, 00:13:05.640 { 00:13:05.640 "name": "BaseBdev2", 00:13:05.640 "uuid": "debb6ee0-f791-5435-9c1a-f1aa1571c57d", 00:13:05.640 "is_configured": true, 00:13:05.640 "data_offset": 2048, 00:13:05.640 "data_size": 63488 00:13:05.640 }, 00:13:05.640 { 00:13:05.640 "name": "BaseBdev3", 00:13:05.640 "uuid": "ac1bb3ad-5eae-5216-8acd-a0b961985532", 00:13:05.640 "is_configured": true, 00:13:05.640 "data_offset": 2048, 00:13:05.640 "data_size": 63488 00:13:05.640 } 00:13:05.640 ] 00:13:05.640 }' 00:13:05.640 19:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.640 19:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.208 19:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:06.208 19:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:06.208 [2024-12-05 19:32:59.615591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:07.143 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:07.143 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.143 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.144 "name": "raid_bdev1", 00:13:07.144 "uuid": "8ac028b6-d048-45d4-8ddf-083e2d81a6cc", 00:13:07.144 "strip_size_kb": 64, 00:13:07.144 "state": "online", 
00:13:07.144 "raid_level": "concat", 00:13:07.144 "superblock": true, 00:13:07.144 "num_base_bdevs": 3, 00:13:07.144 "num_base_bdevs_discovered": 3, 00:13:07.144 "num_base_bdevs_operational": 3, 00:13:07.144 "base_bdevs_list": [ 00:13:07.144 { 00:13:07.144 "name": "BaseBdev1", 00:13:07.144 "uuid": "cfe24ac4-871a-507c-a8c2-a84740333f96", 00:13:07.144 "is_configured": true, 00:13:07.144 "data_offset": 2048, 00:13:07.144 "data_size": 63488 00:13:07.144 }, 00:13:07.144 { 00:13:07.144 "name": "BaseBdev2", 00:13:07.144 "uuid": "debb6ee0-f791-5435-9c1a-f1aa1571c57d", 00:13:07.144 "is_configured": true, 00:13:07.144 "data_offset": 2048, 00:13:07.144 "data_size": 63488 00:13:07.144 }, 00:13:07.144 { 00:13:07.144 "name": "BaseBdev3", 00:13:07.144 "uuid": "ac1bb3ad-5eae-5216-8acd-a0b961985532", 00:13:07.144 "is_configured": true, 00:13:07.144 "data_offset": 2048, 00:13:07.144 "data_size": 63488 00:13:07.144 } 00:13:07.144 ] 00:13:07.144 }' 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.144 19:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.712 [2024-12-05 19:33:01.031795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.712 [2024-12-05 19:33:01.031966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.712 [2024-12-05 19:33:01.035519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.712 { 00:13:07.712 "results": [ 00:13:07.712 { 00:13:07.712 "job": "raid_bdev1", 00:13:07.712 "core_mask": "0x1", 00:13:07.712 "workload": "randrw", 00:13:07.712 
"percentage": 50, 00:13:07.712 "status": "finished", 00:13:07.712 "queue_depth": 1, 00:13:07.712 "io_size": 131072, 00:13:07.712 "runtime": 1.413772, 00:13:07.712 "iops": 10495.327393667438, 00:13:07.712 "mibps": 1311.9159242084297, 00:13:07.712 "io_failed": 1, 00:13:07.712 "io_timeout": 0, 00:13:07.712 "avg_latency_us": 132.6480603324164, 00:13:07.712 "min_latency_us": 38.167272727272724, 00:13:07.712 "max_latency_us": 1854.370909090909 00:13:07.712 } 00:13:07.712 ], 00:13:07.712 "core_count": 1 00:13:07.712 } 00:13:07.712 [2024-12-05 19:33:01.035759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.712 [2024-12-05 19:33:01.035830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.712 [2024-12-05 19:33:01.035850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67293 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67293 ']' 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67293 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67293 00:13:07.712 killing process with pid 67293 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.712 19:33:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67293' 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67293 00:13:07.712 [2024-12-05 19:33:01.070581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.712 19:33:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67293 00:13:07.971 [2024-12-05 19:33:01.278097] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kQJB2ylWoI 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:09.348 00:13:09.348 real 0m4.710s 00:13:09.348 user 0m5.830s 00:13:09.348 sys 0m0.607s 00:13:09.348 ************************************ 00:13:09.348 END TEST raid_write_error_test 00:13:09.348 ************************************ 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.348 19:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.348 19:33:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:09.348 19:33:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:13:09.348 19:33:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:09.348 19:33:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.348 19:33:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.348 ************************************ 00:13:09.348 START TEST raid_state_function_test 00:13:09.348 ************************************ 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:09.348 Process raid pid: 67431 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67431 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67431' 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67431 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67431 ']' 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.348 19:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.348 [2024-12-05 19:33:02.550606] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:13:09.348 [2024-12-05 19:33:02.550997] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.348 [2024-12-05 19:33:02.743692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.607 [2024-12-05 19:33:02.902848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.865 [2024-12-05 19:33:03.116891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.865 [2024-12-05 19:33:03.117175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.124 [2024-12-05 19:33:03.544364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.124 [2024-12-05 19:33:03.544446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.124 [2024-12-05 19:33:03.544464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.124 [2024-12-05 19:33:03.544480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.124 [2024-12-05 19:33:03.544490] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.124 [2024-12-05 19:33:03.544503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.124 
19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.124 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.383 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.383 19:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.383 "name": "Existed_Raid", 00:13:10.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.383 "strip_size_kb": 0, 00:13:10.383 "state": "configuring", 00:13:10.383 "raid_level": "raid1", 00:13:10.383 "superblock": false, 00:13:10.383 "num_base_bdevs": 3, 00:13:10.383 "num_base_bdevs_discovered": 0, 00:13:10.383 "num_base_bdevs_operational": 3, 00:13:10.383 "base_bdevs_list": [ 00:13:10.383 { 00:13:10.383 "name": "BaseBdev1", 00:13:10.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.383 "is_configured": false, 00:13:10.383 "data_offset": 0, 00:13:10.383 "data_size": 0 00:13:10.383 }, 00:13:10.383 { 00:13:10.383 "name": "BaseBdev2", 00:13:10.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.383 "is_configured": false, 00:13:10.383 "data_offset": 0, 00:13:10.383 "data_size": 0 00:13:10.383 }, 00:13:10.383 { 00:13:10.383 "name": "BaseBdev3", 00:13:10.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.383 "is_configured": false, 00:13:10.383 "data_offset": 0, 00:13:10.383 "data_size": 0 00:13:10.383 } 00:13:10.383 ] 00:13:10.383 }' 00:13:10.383 19:33:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.383 19:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 [2024-12-05 19:33:04.096506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.949 [2024-12-05 19:33:04.096551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 [2024-12-05 19:33:04.108486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.949 [2024-12-05 19:33:04.108674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.949 [2024-12-05 19:33:04.108847] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.949 [2024-12-05 19:33:04.108999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.949 [2024-12-05 19:33:04.109108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.949 [2024-12-05 19:33:04.109234] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.949 [2024-12-05 19:33:04.158056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.949 BaseBdev1 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:10.949 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.950 [ 00:13:10.950 { 00:13:10.950 "name": "BaseBdev1", 00:13:10.950 "aliases": [ 00:13:10.950 "5758e9f4-dd42-46d5-9469-f33c5aa10d1b" 00:13:10.950 ], 00:13:10.950 "product_name": "Malloc disk", 00:13:10.950 "block_size": 512, 00:13:10.950 "num_blocks": 65536, 00:13:10.950 "uuid": "5758e9f4-dd42-46d5-9469-f33c5aa10d1b", 00:13:10.950 "assigned_rate_limits": { 00:13:10.950 "rw_ios_per_sec": 0, 00:13:10.950 "rw_mbytes_per_sec": 0, 00:13:10.950 "r_mbytes_per_sec": 0, 00:13:10.950 "w_mbytes_per_sec": 0 00:13:10.950 }, 00:13:10.950 "claimed": true, 00:13:10.950 "claim_type": "exclusive_write", 00:13:10.950 "zoned": false, 00:13:10.950 "supported_io_types": { 00:13:10.950 "read": true, 00:13:10.950 "write": true, 00:13:10.950 "unmap": true, 00:13:10.950 "flush": true, 00:13:10.950 "reset": true, 00:13:10.950 "nvme_admin": false, 00:13:10.950 "nvme_io": false, 00:13:10.950 "nvme_io_md": false, 00:13:10.950 "write_zeroes": true, 00:13:10.950 "zcopy": true, 00:13:10.950 "get_zone_info": false, 00:13:10.950 "zone_management": false, 00:13:10.950 "zone_append": false, 00:13:10.950 "compare": false, 00:13:10.950 "compare_and_write": false, 00:13:10.950 "abort": true, 00:13:10.950 "seek_hole": false, 00:13:10.950 "seek_data": false, 00:13:10.950 "copy": true, 00:13:10.950 "nvme_iov_md": false 00:13:10.950 }, 00:13:10.950 "memory_domains": [ 00:13:10.950 { 00:13:10.950 "dma_device_id": "system", 00:13:10.950 "dma_device_type": 1 00:13:10.950 }, 00:13:10.950 { 00:13:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.950 "dma_device_type": 2 00:13:10.950 } 00:13:10.950 ], 00:13:10.950 "driver_specific": {} 00:13:10.950 } 00:13:10.950 ] 00:13:10.950 19:33:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:10.950 "name": "Existed_Raid", 00:13:10.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.950 "strip_size_kb": 0, 00:13:10.950 "state": "configuring", 00:13:10.950 "raid_level": "raid1", 00:13:10.950 "superblock": false, 00:13:10.950 "num_base_bdevs": 3, 00:13:10.950 "num_base_bdevs_discovered": 1, 00:13:10.950 "num_base_bdevs_operational": 3, 00:13:10.950 "base_bdevs_list": [ 00:13:10.950 { 00:13:10.950 "name": "BaseBdev1", 00:13:10.950 "uuid": "5758e9f4-dd42-46d5-9469-f33c5aa10d1b", 00:13:10.950 "is_configured": true, 00:13:10.950 "data_offset": 0, 00:13:10.950 "data_size": 65536 00:13:10.950 }, 00:13:10.950 { 00:13:10.950 "name": "BaseBdev2", 00:13:10.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.950 "is_configured": false, 00:13:10.950 "data_offset": 0, 00:13:10.950 "data_size": 0 00:13:10.950 }, 00:13:10.950 { 00:13:10.950 "name": "BaseBdev3", 00:13:10.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.950 "is_configured": false, 00:13:10.950 "data_offset": 0, 00:13:10.950 "data_size": 0 00:13:10.950 } 00:13:10.950 ] 00:13:10.950 }' 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.950 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.517 [2024-12-05 19:33:04.718323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.517 [2024-12-05 19:33:04.718386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.517 [2024-12-05 19:33:04.726356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.517 [2024-12-05 19:33:04.728794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:11.517 [2024-12-05 19:33:04.728848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:11.517 [2024-12-05 19:33:04.728867] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:11.517 [2024-12-05 19:33:04.728882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.517 "name": "Existed_Raid", 00:13:11.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.517 "strip_size_kb": 0, 00:13:11.517 "state": "configuring", 00:13:11.517 "raid_level": "raid1", 00:13:11.517 "superblock": false, 00:13:11.517 "num_base_bdevs": 3, 00:13:11.517 "num_base_bdevs_discovered": 1, 00:13:11.517 "num_base_bdevs_operational": 3, 00:13:11.517 "base_bdevs_list": [ 00:13:11.517 { 00:13:11.517 "name": "BaseBdev1", 00:13:11.517 "uuid": "5758e9f4-dd42-46d5-9469-f33c5aa10d1b", 00:13:11.517 "is_configured": true, 00:13:11.517 "data_offset": 0, 00:13:11.517 "data_size": 65536 00:13:11.517 }, 00:13:11.517 { 00:13:11.517 "name": "BaseBdev2", 00:13:11.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.517 
"is_configured": false, 00:13:11.517 "data_offset": 0, 00:13:11.517 "data_size": 0 00:13:11.517 }, 00:13:11.517 { 00:13:11.517 "name": "BaseBdev3", 00:13:11.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.517 "is_configured": false, 00:13:11.517 "data_offset": 0, 00:13:11.517 "data_size": 0 00:13:11.517 } 00:13:11.517 ] 00:13:11.517 }' 00:13:11.517 19:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.518 19:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.099 [2024-12-05 19:33:05.314187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.099 BaseBdev2 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:12.099 19:33:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.099 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.099 [ 00:13:12.099 { 00:13:12.099 "name": "BaseBdev2", 00:13:12.099 "aliases": [ 00:13:12.099 "19bbaff6-09f2-41be-b331-0ed084909c41" 00:13:12.099 ], 00:13:12.099 "product_name": "Malloc disk", 00:13:12.099 "block_size": 512, 00:13:12.099 "num_blocks": 65536, 00:13:12.099 "uuid": "19bbaff6-09f2-41be-b331-0ed084909c41", 00:13:12.099 "assigned_rate_limits": { 00:13:12.099 "rw_ios_per_sec": 0, 00:13:12.099 "rw_mbytes_per_sec": 0, 00:13:12.099 "r_mbytes_per_sec": 0, 00:13:12.099 "w_mbytes_per_sec": 0 00:13:12.099 }, 00:13:12.099 "claimed": true, 00:13:12.099 "claim_type": "exclusive_write", 00:13:12.099 "zoned": false, 00:13:12.099 "supported_io_types": { 00:13:12.099 "read": true, 00:13:12.099 "write": true, 00:13:12.099 "unmap": true, 00:13:12.099 "flush": true, 00:13:12.099 "reset": true, 00:13:12.099 "nvme_admin": false, 00:13:12.099 "nvme_io": false, 00:13:12.099 "nvme_io_md": false, 00:13:12.099 "write_zeroes": true, 00:13:12.099 "zcopy": true, 00:13:12.099 "get_zone_info": false, 00:13:12.099 "zone_management": false, 00:13:12.099 "zone_append": false, 00:13:12.099 "compare": false, 00:13:12.099 "compare_and_write": false, 00:13:12.099 "abort": true, 00:13:12.099 "seek_hole": false, 00:13:12.099 "seek_data": false, 00:13:12.099 "copy": true, 00:13:12.100 "nvme_iov_md": false 00:13:12.100 }, 00:13:12.100 
"memory_domains": [ 00:13:12.100 { 00:13:12.100 "dma_device_id": "system", 00:13:12.100 "dma_device_type": 1 00:13:12.100 }, 00:13:12.100 { 00:13:12.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.100 "dma_device_type": 2 00:13:12.100 } 00:13:12.100 ], 00:13:12.100 "driver_specific": {} 00:13:12.100 } 00:13:12.100 ] 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.100 "name": "Existed_Raid", 00:13:12.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.100 "strip_size_kb": 0, 00:13:12.100 "state": "configuring", 00:13:12.100 "raid_level": "raid1", 00:13:12.100 "superblock": false, 00:13:12.100 "num_base_bdevs": 3, 00:13:12.100 "num_base_bdevs_discovered": 2, 00:13:12.100 "num_base_bdevs_operational": 3, 00:13:12.100 "base_bdevs_list": [ 00:13:12.100 { 00:13:12.100 "name": "BaseBdev1", 00:13:12.100 "uuid": "5758e9f4-dd42-46d5-9469-f33c5aa10d1b", 00:13:12.100 "is_configured": true, 00:13:12.100 "data_offset": 0, 00:13:12.100 "data_size": 65536 00:13:12.100 }, 00:13:12.100 { 00:13:12.100 "name": "BaseBdev2", 00:13:12.100 "uuid": "19bbaff6-09f2-41be-b331-0ed084909c41", 00:13:12.100 "is_configured": true, 00:13:12.100 "data_offset": 0, 00:13:12.100 "data_size": 65536 00:13:12.100 }, 00:13:12.100 { 00:13:12.100 "name": "BaseBdev3", 00:13:12.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.100 "is_configured": false, 00:13:12.100 "data_offset": 0, 00:13:12.100 "data_size": 0 00:13:12.100 } 00:13:12.100 ] 00:13:12.100 }' 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.100 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.667 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:13:12.667 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.667 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.667 [2024-12-05 19:33:05.927106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.667 [2024-12-05 19:33:05.927168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:12.667 [2024-12-05 19:33:05.927188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:12.667 [2024-12-05 19:33:05.927526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:12.667 [2024-12-05 19:33:05.927804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:12.668 [2024-12-05 19:33:05.927822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:12.668 [2024-12-05 19:33:05.928149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.668 BaseBdev3 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.668 [ 00:13:12.668 { 00:13:12.668 "name": "BaseBdev3", 00:13:12.668 "aliases": [ 00:13:12.668 "3debeea8-ceb4-436c-a376-71f925a6bb13" 00:13:12.668 ], 00:13:12.668 "product_name": "Malloc disk", 00:13:12.668 "block_size": 512, 00:13:12.668 "num_blocks": 65536, 00:13:12.668 "uuid": "3debeea8-ceb4-436c-a376-71f925a6bb13", 00:13:12.668 "assigned_rate_limits": { 00:13:12.668 "rw_ios_per_sec": 0, 00:13:12.668 "rw_mbytes_per_sec": 0, 00:13:12.668 "r_mbytes_per_sec": 0, 00:13:12.668 "w_mbytes_per_sec": 0 00:13:12.668 }, 00:13:12.668 "claimed": true, 00:13:12.668 "claim_type": "exclusive_write", 00:13:12.668 "zoned": false, 00:13:12.668 "supported_io_types": { 00:13:12.668 "read": true, 00:13:12.668 "write": true, 00:13:12.668 "unmap": true, 00:13:12.668 "flush": true, 00:13:12.668 "reset": true, 00:13:12.668 "nvme_admin": false, 00:13:12.668 "nvme_io": false, 00:13:12.668 "nvme_io_md": false, 00:13:12.668 "write_zeroes": true, 00:13:12.668 "zcopy": true, 00:13:12.668 "get_zone_info": false, 00:13:12.668 "zone_management": false, 00:13:12.668 "zone_append": false, 00:13:12.668 "compare": false, 00:13:12.668 "compare_and_write": false, 00:13:12.668 "abort": true, 00:13:12.668 "seek_hole": false, 00:13:12.668 "seek_data": false, 00:13:12.668 
"copy": true, 00:13:12.668 "nvme_iov_md": false 00:13:12.668 }, 00:13:12.668 "memory_domains": [ 00:13:12.668 { 00:13:12.668 "dma_device_id": "system", 00:13:12.668 "dma_device_type": 1 00:13:12.668 }, 00:13:12.668 { 00:13:12.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.668 "dma_device_type": 2 00:13:12.668 } 00:13:12.668 ], 00:13:12.668 "driver_specific": {} 00:13:12.668 } 00:13:12.668 ] 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.668 19:33:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.668 19:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.668 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.668 "name": "Existed_Raid", 00:13:12.668 "uuid": "7ad529fe-7374-46ad-98c8-48661b0c4b89", 00:13:12.668 "strip_size_kb": 0, 00:13:12.668 "state": "online", 00:13:12.668 "raid_level": "raid1", 00:13:12.668 "superblock": false, 00:13:12.668 "num_base_bdevs": 3, 00:13:12.668 "num_base_bdevs_discovered": 3, 00:13:12.668 "num_base_bdevs_operational": 3, 00:13:12.668 "base_bdevs_list": [ 00:13:12.668 { 00:13:12.668 "name": "BaseBdev1", 00:13:12.668 "uuid": "5758e9f4-dd42-46d5-9469-f33c5aa10d1b", 00:13:12.668 "is_configured": true, 00:13:12.668 "data_offset": 0, 00:13:12.668 "data_size": 65536 00:13:12.668 }, 00:13:12.668 { 00:13:12.668 "name": "BaseBdev2", 00:13:12.668 "uuid": "19bbaff6-09f2-41be-b331-0ed084909c41", 00:13:12.668 "is_configured": true, 00:13:12.668 "data_offset": 0, 00:13:12.668 "data_size": 65536 00:13:12.668 }, 00:13:12.668 { 00:13:12.668 "name": "BaseBdev3", 00:13:12.668 "uuid": "3debeea8-ceb4-436c-a376-71f925a6bb13", 00:13:12.668 "is_configured": true, 00:13:12.668 "data_offset": 0, 00:13:12.668 "data_size": 65536 00:13:12.668 } 00:13:12.668 ] 00:13:12.668 }' 00:13:12.668 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.668 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.235 19:33:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.235 [2024-12-05 19:33:06.487852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.235 "name": "Existed_Raid", 00:13:13.235 "aliases": [ 00:13:13.235 "7ad529fe-7374-46ad-98c8-48661b0c4b89" 00:13:13.235 ], 00:13:13.235 "product_name": "Raid Volume", 00:13:13.235 "block_size": 512, 00:13:13.235 "num_blocks": 65536, 00:13:13.235 "uuid": "7ad529fe-7374-46ad-98c8-48661b0c4b89", 00:13:13.235 "assigned_rate_limits": { 00:13:13.235 "rw_ios_per_sec": 0, 00:13:13.235 "rw_mbytes_per_sec": 0, 00:13:13.235 "r_mbytes_per_sec": 0, 00:13:13.235 "w_mbytes_per_sec": 0 00:13:13.235 }, 00:13:13.235 "claimed": false, 00:13:13.235 "zoned": false, 
00:13:13.235 "supported_io_types": { 00:13:13.235 "read": true, 00:13:13.235 "write": true, 00:13:13.235 "unmap": false, 00:13:13.235 "flush": false, 00:13:13.235 "reset": true, 00:13:13.235 "nvme_admin": false, 00:13:13.235 "nvme_io": false, 00:13:13.235 "nvme_io_md": false, 00:13:13.235 "write_zeroes": true, 00:13:13.235 "zcopy": false, 00:13:13.235 "get_zone_info": false, 00:13:13.235 "zone_management": false, 00:13:13.235 "zone_append": false, 00:13:13.235 "compare": false, 00:13:13.235 "compare_and_write": false, 00:13:13.235 "abort": false, 00:13:13.235 "seek_hole": false, 00:13:13.235 "seek_data": false, 00:13:13.235 "copy": false, 00:13:13.235 "nvme_iov_md": false 00:13:13.235 }, 00:13:13.235 "memory_domains": [ 00:13:13.235 { 00:13:13.235 "dma_device_id": "system", 00:13:13.235 "dma_device_type": 1 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.235 "dma_device_type": 2 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "dma_device_id": "system", 00:13:13.235 "dma_device_type": 1 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.235 "dma_device_type": 2 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "dma_device_id": "system", 00:13:13.235 "dma_device_type": 1 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.235 "dma_device_type": 2 00:13:13.235 } 00:13:13.235 ], 00:13:13.235 "driver_specific": { 00:13:13.235 "raid": { 00:13:13.235 "uuid": "7ad529fe-7374-46ad-98c8-48661b0c4b89", 00:13:13.235 "strip_size_kb": 0, 00:13:13.235 "state": "online", 00:13:13.235 "raid_level": "raid1", 00:13:13.235 "superblock": false, 00:13:13.235 "num_base_bdevs": 3, 00:13:13.235 "num_base_bdevs_discovered": 3, 00:13:13.235 "num_base_bdevs_operational": 3, 00:13:13.235 "base_bdevs_list": [ 00:13:13.235 { 00:13:13.235 "name": "BaseBdev1", 00:13:13.235 "uuid": "5758e9f4-dd42-46d5-9469-f33c5aa10d1b", 00:13:13.235 "is_configured": true, 00:13:13.235 
"data_offset": 0, 00:13:13.235 "data_size": 65536 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "name": "BaseBdev2", 00:13:13.235 "uuid": "19bbaff6-09f2-41be-b331-0ed084909c41", 00:13:13.235 "is_configured": true, 00:13:13.235 "data_offset": 0, 00:13:13.235 "data_size": 65536 00:13:13.235 }, 00:13:13.235 { 00:13:13.235 "name": "BaseBdev3", 00:13:13.235 "uuid": "3debeea8-ceb4-436c-a376-71f925a6bb13", 00:13:13.235 "is_configured": true, 00:13:13.235 "data_offset": 0, 00:13:13.235 "data_size": 65536 00:13:13.235 } 00:13:13.235 ] 00:13:13.235 } 00:13:13.235 } 00:13:13.235 }' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:13.235 BaseBdev2 00:13:13.235 BaseBdev3' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.235 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.493 [2024-12-05 19:33:06.811558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.493 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.752 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.752 "name": "Existed_Raid", 00:13:13.752 "uuid": "7ad529fe-7374-46ad-98c8-48661b0c4b89", 00:13:13.752 "strip_size_kb": 0, 00:13:13.752 "state": "online", 00:13:13.752 "raid_level": "raid1", 00:13:13.752 "superblock": false, 00:13:13.752 "num_base_bdevs": 3, 00:13:13.752 "num_base_bdevs_discovered": 2, 00:13:13.752 "num_base_bdevs_operational": 2, 00:13:13.752 "base_bdevs_list": [ 00:13:13.752 { 00:13:13.752 "name": null, 00:13:13.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.752 "is_configured": false, 00:13:13.752 "data_offset": 0, 00:13:13.752 "data_size": 65536 00:13:13.752 }, 00:13:13.752 { 00:13:13.752 "name": "BaseBdev2", 00:13:13.752 "uuid": "19bbaff6-09f2-41be-b331-0ed084909c41", 00:13:13.752 "is_configured": true, 00:13:13.752 "data_offset": 0, 00:13:13.752 "data_size": 65536 00:13:13.752 }, 00:13:13.752 { 00:13:13.752 "name": "BaseBdev3", 00:13:13.752 "uuid": "3debeea8-ceb4-436c-a376-71f925a6bb13", 00:13:13.752 "is_configured": true, 00:13:13.752 "data_offset": 0, 00:13:13.752 "data_size": 65536 00:13:13.752 } 00:13:13.752 ] 
00:13:13.752 }' 00:13:13.752 19:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.752 19:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.010 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.268 [2024-12-05 19:33:07.465986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.268 19:33:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.268 [2024-12-05 19:33:07.613226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:14.268 [2024-12-05 19:33:07.613512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.268 [2024-12-05 19:33:07.694978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.268 [2024-12-05 19:33:07.695225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.268 [2024-12-05 19:33:07.695261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.268 19:33:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.268 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.527 BaseBdev2 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.527 
19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.527 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [ 00:13:14.528 { 00:13:14.528 "name": "BaseBdev2", 00:13:14.528 "aliases": [ 00:13:14.528 "f11a4da8-523c-4682-92b0-67648d1f2a33" 00:13:14.528 ], 00:13:14.528 "product_name": "Malloc disk", 00:13:14.528 "block_size": 512, 00:13:14.528 "num_blocks": 65536, 00:13:14.528 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:14.528 "assigned_rate_limits": { 00:13:14.528 "rw_ios_per_sec": 0, 00:13:14.528 "rw_mbytes_per_sec": 0, 00:13:14.528 "r_mbytes_per_sec": 0, 00:13:14.528 "w_mbytes_per_sec": 0 00:13:14.528 }, 00:13:14.528 "claimed": false, 00:13:14.528 "zoned": false, 00:13:14.528 "supported_io_types": { 00:13:14.528 "read": true, 00:13:14.528 "write": true, 00:13:14.528 "unmap": true, 00:13:14.528 "flush": true, 00:13:14.528 "reset": true, 00:13:14.528 "nvme_admin": false, 00:13:14.528 "nvme_io": false, 00:13:14.528 "nvme_io_md": false, 00:13:14.528 "write_zeroes": true, 
00:13:14.528 "zcopy": true, 00:13:14.528 "get_zone_info": false, 00:13:14.528 "zone_management": false, 00:13:14.528 "zone_append": false, 00:13:14.528 "compare": false, 00:13:14.528 "compare_and_write": false, 00:13:14.528 "abort": true, 00:13:14.528 "seek_hole": false, 00:13:14.528 "seek_data": false, 00:13:14.528 "copy": true, 00:13:14.528 "nvme_iov_md": false 00:13:14.528 }, 00:13:14.528 "memory_domains": [ 00:13:14.528 { 00:13:14.528 "dma_device_id": "system", 00:13:14.528 "dma_device_type": 1 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.528 "dma_device_type": 2 00:13:14.528 } 00:13:14.528 ], 00:13:14.528 "driver_specific": {} 00:13:14.528 } 00:13:14.528 ] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 BaseBdev3 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.528 19:33:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [ 00:13:14.528 { 00:13:14.528 "name": "BaseBdev3", 00:13:14.528 "aliases": [ 00:13:14.528 "1a51adbb-d05e-4599-bad8-bf6e1fdce33d" 00:13:14.528 ], 00:13:14.528 "product_name": "Malloc disk", 00:13:14.528 "block_size": 512, 00:13:14.528 "num_blocks": 65536, 00:13:14.528 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:14.528 "assigned_rate_limits": { 00:13:14.528 "rw_ios_per_sec": 0, 00:13:14.528 "rw_mbytes_per_sec": 0, 00:13:14.528 "r_mbytes_per_sec": 0, 00:13:14.528 "w_mbytes_per_sec": 0 00:13:14.528 }, 00:13:14.528 "claimed": false, 00:13:14.528 "zoned": false, 00:13:14.528 "supported_io_types": { 00:13:14.528 "read": true, 00:13:14.528 "write": true, 00:13:14.528 "unmap": true, 00:13:14.528 "flush": true, 00:13:14.528 "reset": true, 00:13:14.528 "nvme_admin": false, 00:13:14.528 "nvme_io": false, 00:13:14.528 "nvme_io_md": false, 00:13:14.528 "write_zeroes": true, 
00:13:14.528 "zcopy": true, 00:13:14.528 "get_zone_info": false, 00:13:14.528 "zone_management": false, 00:13:14.528 "zone_append": false, 00:13:14.528 "compare": false, 00:13:14.528 "compare_and_write": false, 00:13:14.528 "abort": true, 00:13:14.528 "seek_hole": false, 00:13:14.528 "seek_data": false, 00:13:14.528 "copy": true, 00:13:14.528 "nvme_iov_md": false 00:13:14.528 }, 00:13:14.528 "memory_domains": [ 00:13:14.528 { 00:13:14.528 "dma_device_id": "system", 00:13:14.528 "dma_device_type": 1 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.528 "dma_device_type": 2 00:13:14.528 } 00:13:14.528 ], 00:13:14.528 "driver_specific": {} 00:13:14.528 } 00:13:14.528 ] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 [2024-12-05 19:33:07.904318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.528 [2024-12-05 19:33:07.904551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.528 [2024-12-05 19:33:07.904590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.528 [2024-12-05 19:33:07.907023] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:14.528 "name": "Existed_Raid", 00:13:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.528 "strip_size_kb": 0, 00:13:14.528 "state": "configuring", 00:13:14.528 "raid_level": "raid1", 00:13:14.528 "superblock": false, 00:13:14.528 "num_base_bdevs": 3, 00:13:14.528 "num_base_bdevs_discovered": 2, 00:13:14.528 "num_base_bdevs_operational": 3, 00:13:14.528 "base_bdevs_list": [ 00:13:14.528 { 00:13:14.528 "name": "BaseBdev1", 00:13:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.528 "is_configured": false, 00:13:14.528 "data_offset": 0, 00:13:14.528 "data_size": 0 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "name": "BaseBdev2", 00:13:14.528 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:14.528 "is_configured": true, 00:13:14.528 "data_offset": 0, 00:13:14.528 "data_size": 65536 00:13:14.528 }, 00:13:14.528 { 00:13:14.528 "name": "BaseBdev3", 00:13:14.528 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:14.528 "is_configured": true, 00:13:14.528 "data_offset": 0, 00:13:14.528 "data_size": 65536 00:13:14.528 } 00:13:14.528 ] 00:13:14.528 }' 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.528 19:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.096 [2024-12-05 19:33:08.432539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.096 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.097 "name": "Existed_Raid", 00:13:15.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.097 "strip_size_kb": 0, 00:13:15.097 "state": "configuring", 00:13:15.097 "raid_level": "raid1", 00:13:15.097 "superblock": false, 00:13:15.097 "num_base_bdevs": 3, 
00:13:15.097 "num_base_bdevs_discovered": 1, 00:13:15.097 "num_base_bdevs_operational": 3, 00:13:15.097 "base_bdevs_list": [ 00:13:15.097 { 00:13:15.097 "name": "BaseBdev1", 00:13:15.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.097 "is_configured": false, 00:13:15.097 "data_offset": 0, 00:13:15.097 "data_size": 0 00:13:15.097 }, 00:13:15.097 { 00:13:15.097 "name": null, 00:13:15.097 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:15.097 "is_configured": false, 00:13:15.097 "data_offset": 0, 00:13:15.097 "data_size": 65536 00:13:15.097 }, 00:13:15.097 { 00:13:15.097 "name": "BaseBdev3", 00:13:15.097 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:15.097 "is_configured": true, 00:13:15.097 "data_offset": 0, 00:13:15.097 "data_size": 65536 00:13:15.097 } 00:13:15.097 ] 00:13:15.097 }' 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.097 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.664 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.664 19:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.664 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.665 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.665 19:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.665 19:33:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.665 [2024-12-05 19:33:09.051601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.665 BaseBdev1 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.665 [ 00:13:15.665 { 00:13:15.665 "name": "BaseBdev1", 00:13:15.665 "aliases": [ 00:13:15.665 "ab7e894a-a027-4d2f-9fdc-ee692c030085" 00:13:15.665 ], 00:13:15.665 "product_name": "Malloc disk", 
00:13:15.665 "block_size": 512, 00:13:15.665 "num_blocks": 65536, 00:13:15.665 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:15.665 "assigned_rate_limits": { 00:13:15.665 "rw_ios_per_sec": 0, 00:13:15.665 "rw_mbytes_per_sec": 0, 00:13:15.665 "r_mbytes_per_sec": 0, 00:13:15.665 "w_mbytes_per_sec": 0 00:13:15.665 }, 00:13:15.665 "claimed": true, 00:13:15.665 "claim_type": "exclusive_write", 00:13:15.665 "zoned": false, 00:13:15.665 "supported_io_types": { 00:13:15.665 "read": true, 00:13:15.665 "write": true, 00:13:15.665 "unmap": true, 00:13:15.665 "flush": true, 00:13:15.665 "reset": true, 00:13:15.665 "nvme_admin": false, 00:13:15.665 "nvme_io": false, 00:13:15.665 "nvme_io_md": false, 00:13:15.665 "write_zeroes": true, 00:13:15.665 "zcopy": true, 00:13:15.665 "get_zone_info": false, 00:13:15.665 "zone_management": false, 00:13:15.665 "zone_append": false, 00:13:15.665 "compare": false, 00:13:15.665 "compare_and_write": false, 00:13:15.665 "abort": true, 00:13:15.665 "seek_hole": false, 00:13:15.665 "seek_data": false, 00:13:15.665 "copy": true, 00:13:15.665 "nvme_iov_md": false 00:13:15.665 }, 00:13:15.665 "memory_domains": [ 00:13:15.665 { 00:13:15.665 "dma_device_id": "system", 00:13:15.665 "dma_device_type": 1 00:13:15.665 }, 00:13:15.665 { 00:13:15.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.665 "dma_device_type": 2 00:13:15.665 } 00:13:15.665 ], 00:13:15.665 "driver_specific": {} 00:13:15.665 } 00:13:15.665 ] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.665 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.924 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.924 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.924 "name": "Existed_Raid", 00:13:15.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.924 "strip_size_kb": 0, 00:13:15.924 "state": "configuring", 00:13:15.924 "raid_level": "raid1", 00:13:15.924 "superblock": false, 00:13:15.924 "num_base_bdevs": 3, 00:13:15.924 "num_base_bdevs_discovered": 2, 00:13:15.924 "num_base_bdevs_operational": 3, 00:13:15.924 "base_bdevs_list": [ 00:13:15.924 { 00:13:15.924 "name": "BaseBdev1", 00:13:15.924 "uuid": 
"ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:15.924 "is_configured": true, 00:13:15.924 "data_offset": 0, 00:13:15.924 "data_size": 65536 00:13:15.924 }, 00:13:15.924 { 00:13:15.924 "name": null, 00:13:15.924 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:15.924 "is_configured": false, 00:13:15.924 "data_offset": 0, 00:13:15.924 "data_size": 65536 00:13:15.924 }, 00:13:15.924 { 00:13:15.924 "name": "BaseBdev3", 00:13:15.924 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:15.924 "is_configured": true, 00:13:15.924 "data_offset": 0, 00:13:15.924 "data_size": 65536 00:13:15.924 } 00:13:15.924 ] 00:13:15.924 }' 00:13:15.924 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.924 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.182 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.182 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:16.182 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.182 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.441 [2024-12-05 19:33:09.667904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:16.441 19:33:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.441 "name": "Existed_Raid", 00:13:16.441 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:16.441 "strip_size_kb": 0, 00:13:16.441 "state": "configuring", 00:13:16.441 "raid_level": "raid1", 00:13:16.441 "superblock": false, 00:13:16.441 "num_base_bdevs": 3, 00:13:16.441 "num_base_bdevs_discovered": 1, 00:13:16.441 "num_base_bdevs_operational": 3, 00:13:16.441 "base_bdevs_list": [ 00:13:16.441 { 00:13:16.441 "name": "BaseBdev1", 00:13:16.441 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:16.441 "is_configured": true, 00:13:16.441 "data_offset": 0, 00:13:16.441 "data_size": 65536 00:13:16.441 }, 00:13:16.441 { 00:13:16.441 "name": null, 00:13:16.441 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:16.441 "is_configured": false, 00:13:16.441 "data_offset": 0, 00:13:16.441 "data_size": 65536 00:13:16.441 }, 00:13:16.441 { 00:13:16.441 "name": null, 00:13:16.441 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:16.441 "is_configured": false, 00:13:16.441 "data_offset": 0, 00:13:16.441 "data_size": 65536 00:13:16.441 } 00:13:16.441 ] 00:13:16.441 }' 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.441 19:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.008 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:17.008 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.008 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.009 [2024-12-05 19:33:10.256114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.009 "name": "Existed_Raid", 00:13:17.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.009 "strip_size_kb": 0, 00:13:17.009 "state": "configuring", 00:13:17.009 "raid_level": "raid1", 00:13:17.009 "superblock": false, 00:13:17.009 "num_base_bdevs": 3, 00:13:17.009 "num_base_bdevs_discovered": 2, 00:13:17.009 "num_base_bdevs_operational": 3, 00:13:17.009 "base_bdevs_list": [ 00:13:17.009 { 00:13:17.009 "name": "BaseBdev1", 00:13:17.009 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:17.009 "is_configured": true, 00:13:17.009 "data_offset": 0, 00:13:17.009 "data_size": 65536 00:13:17.009 }, 00:13:17.009 { 00:13:17.009 "name": null, 00:13:17.009 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:17.009 "is_configured": false, 00:13:17.009 "data_offset": 0, 00:13:17.009 "data_size": 65536 00:13:17.009 }, 00:13:17.009 { 00:13:17.009 "name": "BaseBdev3", 00:13:17.009 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:17.009 "is_configured": true, 00:13:17.009 "data_offset": 0, 00:13:17.009 "data_size": 65536 00:13:17.009 } 00:13:17.009 ] 00:13:17.009 }' 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.009 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.576 19:33:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:17.576 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.577 [2024-12-05 19:33:10.836370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.577 "name": "Existed_Raid", 00:13:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.577 "strip_size_kb": 0, 00:13:17.577 "state": "configuring", 00:13:17.577 "raid_level": "raid1", 00:13:17.577 "superblock": false, 00:13:17.577 "num_base_bdevs": 3, 00:13:17.577 "num_base_bdevs_discovered": 1, 00:13:17.577 "num_base_bdevs_operational": 3, 00:13:17.577 "base_bdevs_list": [ 00:13:17.577 { 00:13:17.577 "name": null, 00:13:17.577 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:17.577 "is_configured": false, 00:13:17.577 "data_offset": 0, 00:13:17.577 "data_size": 65536 00:13:17.577 }, 00:13:17.577 { 00:13:17.577 "name": null, 00:13:17.577 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:17.577 "is_configured": false, 00:13:17.577 "data_offset": 0, 00:13:17.577 "data_size": 65536 00:13:17.577 }, 00:13:17.577 { 00:13:17.577 "name": "BaseBdev3", 00:13:17.577 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:17.577 "is_configured": true, 00:13:17.577 "data_offset": 0, 00:13:17.577 "data_size": 65536 00:13:17.577 } 00:13:17.577 ] 00:13:17.577 }' 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.577 19:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.145 [2024-12-05 19:33:11.504005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.145 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.146 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.146 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.146 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.146 "name": "Existed_Raid", 00:13:18.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.146 "strip_size_kb": 0, 00:13:18.146 "state": "configuring", 00:13:18.146 "raid_level": "raid1", 00:13:18.146 "superblock": false, 00:13:18.146 "num_base_bdevs": 3, 00:13:18.146 "num_base_bdevs_discovered": 2, 00:13:18.146 "num_base_bdevs_operational": 3, 00:13:18.146 "base_bdevs_list": [ 00:13:18.146 { 00:13:18.146 "name": null, 00:13:18.146 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:18.146 "is_configured": false, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 }, 00:13:18.146 { 00:13:18.146 "name": "BaseBdev2", 00:13:18.146 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:18.146 "is_configured": true, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 }, 00:13:18.146 { 00:13:18.146 "name": "BaseBdev3", 
00:13:18.146 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:18.146 "is_configured": true, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 } 00:13:18.146 ] 00:13:18.146 }' 00:13:18.146 19:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.146 19:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab7e894a-a027-4d2f-9fdc-ee692c030085 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.714 19:33:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.974 [2024-12-05 19:33:12.164163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:18.974 [2024-12-05 19:33:12.164231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:18.974 [2024-12-05 19:33:12.164244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:18.974 [2024-12-05 19:33:12.164580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:18.974 [2024-12-05 19:33:12.164847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:18.974 [2024-12-05 19:33:12.164868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:18.974 [2024-12-05 19:33:12.165198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.974 NewBaseBdev 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.974 
19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.974 [ 00:13:18.974 { 00:13:18.974 "name": "NewBaseBdev", 00:13:18.974 "aliases": [ 00:13:18.974 "ab7e894a-a027-4d2f-9fdc-ee692c030085" 00:13:18.974 ], 00:13:18.974 "product_name": "Malloc disk", 00:13:18.974 "block_size": 512, 00:13:18.974 "num_blocks": 65536, 00:13:18.974 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:18.974 "assigned_rate_limits": { 00:13:18.974 "rw_ios_per_sec": 0, 00:13:18.974 "rw_mbytes_per_sec": 0, 00:13:18.974 "r_mbytes_per_sec": 0, 00:13:18.974 "w_mbytes_per_sec": 0 00:13:18.974 }, 00:13:18.974 "claimed": true, 00:13:18.974 "claim_type": "exclusive_write", 00:13:18.974 "zoned": false, 00:13:18.974 "supported_io_types": { 00:13:18.974 "read": true, 00:13:18.974 "write": true, 00:13:18.974 "unmap": true, 00:13:18.974 "flush": true, 00:13:18.974 "reset": true, 00:13:18.974 "nvme_admin": false, 00:13:18.974 "nvme_io": false, 00:13:18.974 "nvme_io_md": false, 00:13:18.974 "write_zeroes": true, 00:13:18.974 "zcopy": true, 00:13:18.974 "get_zone_info": false, 00:13:18.974 "zone_management": false, 00:13:18.974 "zone_append": false, 00:13:18.974 "compare": false, 00:13:18.974 "compare_and_write": false, 00:13:18.974 "abort": true, 00:13:18.974 "seek_hole": false, 00:13:18.974 "seek_data": false, 00:13:18.974 "copy": true, 00:13:18.974 "nvme_iov_md": false 00:13:18.974 }, 00:13:18.974 "memory_domains": [ 00:13:18.974 { 00:13:18.974 "dma_device_id": "system", 00:13:18.974 "dma_device_type": 1 
00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.974 "dma_device_type": 2 00:13:18.974 } 00:13:18.974 ], 00:13:18.974 "driver_specific": {} 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.974 "name": "Existed_Raid", 00:13:18.974 "uuid": "4560c0f9-1000-47e1-ad4d-399c3f054a6c", 00:13:18.974 "strip_size_kb": 0, 00:13:18.974 "state": "online", 00:13:18.974 "raid_level": "raid1", 00:13:18.974 "superblock": false, 00:13:18.974 "num_base_bdevs": 3, 00:13:18.974 "num_base_bdevs_discovered": 3, 00:13:18.974 "num_base_bdevs_operational": 3, 00:13:18.974 "base_bdevs_list": [ 00:13:18.974 { 00:13:18.974 "name": "NewBaseBdev", 00:13:18.974 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:18.974 "is_configured": true, 00:13:18.974 "data_offset": 0, 00:13:18.974 "data_size": 65536 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "name": "BaseBdev2", 00:13:18.974 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:18.974 "is_configured": true, 00:13:18.974 "data_offset": 0, 00:13:18.974 "data_size": 65536 00:13:18.974 }, 00:13:18.974 { 00:13:18.974 "name": "BaseBdev3", 00:13:18.974 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:18.974 "is_configured": true, 00:13:18.974 "data_offset": 0, 00:13:18.974 "data_size": 65536 00:13:18.974 } 00:13:18.974 ] 00:13:18.974 }' 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.974 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.542 [2024-12-05 19:33:12.744861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.542 "name": "Existed_Raid", 00:13:19.542 "aliases": [ 00:13:19.542 "4560c0f9-1000-47e1-ad4d-399c3f054a6c" 00:13:19.542 ], 00:13:19.542 "product_name": "Raid Volume", 00:13:19.542 "block_size": 512, 00:13:19.542 "num_blocks": 65536, 00:13:19.542 "uuid": "4560c0f9-1000-47e1-ad4d-399c3f054a6c", 00:13:19.542 "assigned_rate_limits": { 00:13:19.542 "rw_ios_per_sec": 0, 00:13:19.542 "rw_mbytes_per_sec": 0, 00:13:19.542 "r_mbytes_per_sec": 0, 00:13:19.542 "w_mbytes_per_sec": 0 00:13:19.542 }, 00:13:19.542 "claimed": false, 00:13:19.542 "zoned": false, 00:13:19.542 "supported_io_types": { 00:13:19.542 "read": true, 00:13:19.542 "write": true, 00:13:19.542 "unmap": false, 00:13:19.542 "flush": false, 00:13:19.542 "reset": true, 00:13:19.542 "nvme_admin": false, 00:13:19.542 "nvme_io": false, 00:13:19.542 "nvme_io_md": false, 00:13:19.542 "write_zeroes": true, 00:13:19.542 "zcopy": false, 00:13:19.542 "get_zone_info": false, 00:13:19.542 "zone_management": false, 00:13:19.542 
"zone_append": false, 00:13:19.542 "compare": false, 00:13:19.542 "compare_and_write": false, 00:13:19.542 "abort": false, 00:13:19.542 "seek_hole": false, 00:13:19.542 "seek_data": false, 00:13:19.542 "copy": false, 00:13:19.542 "nvme_iov_md": false 00:13:19.542 }, 00:13:19.542 "memory_domains": [ 00:13:19.542 { 00:13:19.542 "dma_device_id": "system", 00:13:19.542 "dma_device_type": 1 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.542 "dma_device_type": 2 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "dma_device_id": "system", 00:13:19.542 "dma_device_type": 1 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.542 "dma_device_type": 2 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "dma_device_id": "system", 00:13:19.542 "dma_device_type": 1 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.542 "dma_device_type": 2 00:13:19.542 } 00:13:19.542 ], 00:13:19.542 "driver_specific": { 00:13:19.542 "raid": { 00:13:19.542 "uuid": "4560c0f9-1000-47e1-ad4d-399c3f054a6c", 00:13:19.542 "strip_size_kb": 0, 00:13:19.542 "state": "online", 00:13:19.542 "raid_level": "raid1", 00:13:19.542 "superblock": false, 00:13:19.542 "num_base_bdevs": 3, 00:13:19.542 "num_base_bdevs_discovered": 3, 00:13:19.542 "num_base_bdevs_operational": 3, 00:13:19.542 "base_bdevs_list": [ 00:13:19.542 { 00:13:19.542 "name": "NewBaseBdev", 00:13:19.542 "uuid": "ab7e894a-a027-4d2f-9fdc-ee692c030085", 00:13:19.542 "is_configured": true, 00:13:19.542 "data_offset": 0, 00:13:19.542 "data_size": 65536 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "name": "BaseBdev2", 00:13:19.542 "uuid": "f11a4da8-523c-4682-92b0-67648d1f2a33", 00:13:19.542 "is_configured": true, 00:13:19.542 "data_offset": 0, 00:13:19.542 "data_size": 65536 00:13:19.542 }, 00:13:19.542 { 00:13:19.542 "name": "BaseBdev3", 00:13:19.542 "uuid": "1a51adbb-d05e-4599-bad8-bf6e1fdce33d", 00:13:19.542 "is_configured": true, 
00:13:19.542 "data_offset": 0, 00:13:19.542 "data_size": 65536 00:13:19.542 } 00:13:19.542 ] 00:13:19.542 } 00:13:19.542 } 00:13:19.542 }' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:19.542 BaseBdev2 00:13:19.542 BaseBdev3' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.542 19:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.801 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.801 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.801 19:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.801 19:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.801 19:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:19.801 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.801 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.801 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.801 19:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.802 [2024-12-05 19:33:13.056509] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:13:19.802 [2024-12-05 19:33:13.056745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.802 [2024-12-05 19:33:13.056855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.802 [2024-12-05 19:33:13.057253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.802 [2024-12-05 19:33:13.057271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67431 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67431 ']' 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67431 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67431 00:13:19.802 killing process with pid 67431 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67431' 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67431 00:13:19.802 [2024-12-05 19:33:13.097189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:13:19.802 19:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67431 00:13:20.061 [2024-12-05 19:33:13.366077] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.009 19:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:21.009 00:13:21.009 real 0m11.999s 00:13:21.009 user 0m19.878s 00:13:21.009 sys 0m1.688s 00:13:21.009 19:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.009 19:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.009 ************************************ 00:13:21.009 END TEST raid_state_function_test 00:13:21.009 ************************************ 00:13:21.286 19:33:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:21.286 19:33:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:21.286 19:33:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.286 19:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.287 ************************************ 00:13:21.287 START TEST raid_state_function_test_sb 00:13:21.287 ************************************ 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:21.287 Process raid pid: 68069 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68069 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68069' 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68069 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68069 ']' 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.287 19:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.287 [2024-12-05 19:33:14.591530] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:13:21.287 [2024-12-05 19:33:14.591749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.546 [2024-12-05 19:33:14.777483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.546 [2024-12-05 19:33:14.932980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.805 [2024-12-05 19:33:15.140114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.805 [2024-12-05 19:33:15.140373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.373 [2024-12-05 19:33:15.527821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.373 [2024-12-05 19:33:15.527892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.373 [2024-12-05 19:33:15.527910] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.373 [2024-12-05 19:33:15.527928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.373 [2024-12-05 19:33:15.527939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:22.373 [2024-12-05 19:33:15.527955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.373 "name": "Existed_Raid", 00:13:22.373 "uuid": "509c340b-9fc8-44bc-9510-19086a3346ea", 00:13:22.373 "strip_size_kb": 0, 00:13:22.373 "state": "configuring", 00:13:22.373 "raid_level": "raid1", 00:13:22.373 "superblock": true, 00:13:22.373 "num_base_bdevs": 3, 00:13:22.373 "num_base_bdevs_discovered": 0, 00:13:22.373 "num_base_bdevs_operational": 3, 00:13:22.373 "base_bdevs_list": [ 00:13:22.373 { 00:13:22.373 "name": "BaseBdev1", 00:13:22.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.373 "is_configured": false, 00:13:22.373 "data_offset": 0, 00:13:22.373 "data_size": 0 00:13:22.373 }, 00:13:22.373 { 00:13:22.373 "name": "BaseBdev2", 00:13:22.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.373 "is_configured": false, 00:13:22.373 "data_offset": 0, 00:13:22.373 "data_size": 0 00:13:22.373 }, 00:13:22.373 { 00:13:22.373 "name": "BaseBdev3", 00:13:22.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.373 "is_configured": false, 00:13:22.373 "data_offset": 0, 00:13:22.373 "data_size": 0 00:13:22.373 } 00:13:22.373 ] 00:13:22.373 }' 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.373 19:33:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.632 [2024-12-05 19:33:16.039937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.632 [2024-12-05 19:33:16.040128] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.632 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.632 [2024-12-05 19:33:16.047921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.632 [2024-12-05 19:33:16.047980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.632 [2024-12-05 19:33:16.047997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.633 [2024-12-05 19:33:16.048014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.633 [2024-12-05 19:33:16.048041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.633 [2024-12-05 19:33:16.048056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.633 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.633 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:22.633 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.633 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 [2024-12-05 19:33:16.094363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.891 BaseBdev1 
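In the sequence above, `bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid` is issued before any of the base bdevs exist, so the RPC records each one as "doesn't exist now" and leaves the array in the `configuring` state with zero discovered members. A minimal sketch of the state check, run against a stand-in for the `bdev_raid_get_bdevs` JSON captured in the log rather than a live SPDK target (assumes `jq` and bash are available):

```shell
#!/usr/bin/env bash
# Stand-in for the live RPC output shown in the log: a freshly created
# RAID1 "Existed_Raid" whose three base bdevs do not exist yet.
raid_bdev_info=$(cat <<'EOF'
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3
}
EOF
)

# The checks verify_raid_bdev_state performs, reduced to jq one-liners.
state=$(jq -r '.state' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")

[[ $state == configuring ]] && echo "state ok: $state"
(( discovered == 0 )) || echo "unexpected discovered count: $discovered"
```

Against a running target the heredoc would instead come from `rpc.py bdev_raid_get_bdevs all`; everything else is the same comparison the test script makes.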
00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 [ 00:13:22.891 { 00:13:22.891 "name": "BaseBdev1", 00:13:22.891 "aliases": [ 00:13:22.891 "36f8f303-68dc-4429-aa0f-3907994aa771" 00:13:22.891 ], 00:13:22.891 "product_name": "Malloc disk", 00:13:22.891 "block_size": 512, 00:13:22.891 "num_blocks": 65536, 00:13:22.891 "uuid": "36f8f303-68dc-4429-aa0f-3907994aa771", 00:13:22.891 "assigned_rate_limits": { 00:13:22.891 
"rw_ios_per_sec": 0, 00:13:22.891 "rw_mbytes_per_sec": 0, 00:13:22.891 "r_mbytes_per_sec": 0, 00:13:22.891 "w_mbytes_per_sec": 0 00:13:22.891 }, 00:13:22.891 "claimed": true, 00:13:22.891 "claim_type": "exclusive_write", 00:13:22.891 "zoned": false, 00:13:22.891 "supported_io_types": { 00:13:22.891 "read": true, 00:13:22.891 "write": true, 00:13:22.891 "unmap": true, 00:13:22.891 "flush": true, 00:13:22.891 "reset": true, 00:13:22.891 "nvme_admin": false, 00:13:22.891 "nvme_io": false, 00:13:22.891 "nvme_io_md": false, 00:13:22.891 "write_zeroes": true, 00:13:22.891 "zcopy": true, 00:13:22.891 "get_zone_info": false, 00:13:22.891 "zone_management": false, 00:13:22.891 "zone_append": false, 00:13:22.891 "compare": false, 00:13:22.891 "compare_and_write": false, 00:13:22.891 "abort": true, 00:13:22.891 "seek_hole": false, 00:13:22.891 "seek_data": false, 00:13:22.891 "copy": true, 00:13:22.891 "nvme_iov_md": false 00:13:22.891 }, 00:13:22.891 "memory_domains": [ 00:13:22.891 { 00:13:22.891 "dma_device_id": "system", 00:13:22.891 "dma_device_type": 1 00:13:22.891 }, 00:13:22.891 { 00:13:22.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.891 "dma_device_type": 2 00:13:22.891 } 00:13:22.891 ], 00:13:22.891 "driver_specific": {} 00:13:22.891 } 00:13:22.891 ] 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.891 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.891 "name": "Existed_Raid", 00:13:22.891 "uuid": "36f6dce8-f092-44b4-9b6b-169c7415e14a", 00:13:22.891 "strip_size_kb": 0, 00:13:22.891 "state": "configuring", 00:13:22.891 "raid_level": "raid1", 00:13:22.891 "superblock": true, 00:13:22.891 "num_base_bdevs": 3, 00:13:22.891 "num_base_bdevs_discovered": 1, 00:13:22.891 "num_base_bdevs_operational": 3, 00:13:22.891 "base_bdevs_list": [ 00:13:22.891 { 00:13:22.891 "name": "BaseBdev1", 00:13:22.891 "uuid": "36f8f303-68dc-4429-aa0f-3907994aa771", 00:13:22.892 "is_configured": true, 00:13:22.892 "data_offset": 2048, 00:13:22.892 "data_size": 63488 
00:13:22.892 }, 00:13:22.892 { 00:13:22.892 "name": "BaseBdev2", 00:13:22.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.892 "is_configured": false, 00:13:22.892 "data_offset": 0, 00:13:22.892 "data_size": 0 00:13:22.892 }, 00:13:22.892 { 00:13:22.892 "name": "BaseBdev3", 00:13:22.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.892 "is_configured": false, 00:13:22.892 "data_offset": 0, 00:13:22.892 "data_size": 0 00:13:22.892 } 00:13:22.892 ] 00:13:22.892 }' 00:13:22.892 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.892 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.460 [2024-12-05 19:33:16.626580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.460 [2024-12-05 19:33:16.626644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.460 [2024-12-05 19:33:16.634610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.460 [2024-12-05 19:33:16.637228] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.460 [2024-12-05 19:33:16.637286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.460 [2024-12-05 19:33:16.637305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.460 [2024-12-05 19:33:16.637322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
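`bdev_raid_get_bdevs all` returns a JSON array covering every raid bdev, so before each state check the script narrows the output to the array under test with the filter visible in the trace, `jq -r '.[] | select(.name == "Existed_Raid")'`. A sketch of that selection against a stand-in array (field values taken from the log; `jq` assumed installed):

```shell
#!/usr/bin/env bash
# bdev_raid_get_bdevs returns an array; keep only the entry named "Existed_Raid".
all_raids='[{"name": "Existed_Raid", "state": "configuring", "num_base_bdevs_discovered": 1}]'

# Same filter the test uses: .[] | select(.name == "Existed_Raid")
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<<"$all_raids")

name=$(jq -r '.name' <<<"$raid_bdev_info")
echo "selected: $name"
```

The selected object is then inspected field by field, which is why the script stores it in a `raid_bdev_info` variable rather than re-querying the target for each property.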
00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.460 "name": "Existed_Raid", 00:13:23.460 "uuid": "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4", 00:13:23.460 "strip_size_kb": 0, 00:13:23.460 "state": "configuring", 00:13:23.460 "raid_level": "raid1", 00:13:23.460 "superblock": true, 00:13:23.460 "num_base_bdevs": 3, 00:13:23.460 "num_base_bdevs_discovered": 1, 00:13:23.460 "num_base_bdevs_operational": 3, 00:13:23.460 "base_bdevs_list": [ 00:13:23.460 { 00:13:23.460 "name": "BaseBdev1", 00:13:23.460 "uuid": "36f8f303-68dc-4429-aa0f-3907994aa771", 00:13:23.460 "is_configured": true, 00:13:23.460 "data_offset": 2048, 00:13:23.460 "data_size": 63488 00:13:23.460 }, 00:13:23.460 { 00:13:23.460 "name": "BaseBdev2", 00:13:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.460 "is_configured": false, 00:13:23.460 "data_offset": 0, 00:13:23.460 "data_size": 0 00:13:23.460 }, 00:13:23.460 { 00:13:23.460 "name": "BaseBdev3", 00:13:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.460 "is_configured": false, 00:13:23.460 "data_offset": 0, 00:13:23.460 "data_size": 0 00:13:23.460 } 00:13:23.460 ] 00:13:23.460 }' 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.460 19:33:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:23.718 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.718 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.718 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 [2024-12-05 19:33:17.175363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.977 BaseBdev2 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 [ 00:13:23.977 { 00:13:23.977 "name": "BaseBdev2", 00:13:23.977 "aliases": [ 00:13:23.977 "9a7695d7-3bc4-466a-bb7f-5f40fa969b8d" 00:13:23.977 ], 00:13:23.977 "product_name": "Malloc disk", 00:13:23.977 "block_size": 512, 00:13:23.977 "num_blocks": 65536, 00:13:23.977 "uuid": "9a7695d7-3bc4-466a-bb7f-5f40fa969b8d", 00:13:23.977 "assigned_rate_limits": { 00:13:23.977 "rw_ios_per_sec": 0, 00:13:23.977 "rw_mbytes_per_sec": 0, 00:13:23.977 "r_mbytes_per_sec": 0, 00:13:23.977 "w_mbytes_per_sec": 0 00:13:23.977 }, 00:13:23.977 "claimed": true, 00:13:23.977 "claim_type": "exclusive_write", 00:13:23.977 "zoned": false, 00:13:23.977 "supported_io_types": { 00:13:23.977 "read": true, 00:13:23.977 "write": true, 00:13:23.977 "unmap": true, 00:13:23.977 "flush": true, 00:13:23.977 "reset": true, 00:13:23.977 "nvme_admin": false, 00:13:23.977 "nvme_io": false, 00:13:23.977 "nvme_io_md": false, 00:13:23.977 "write_zeroes": true, 00:13:23.977 "zcopy": true, 00:13:23.977 "get_zone_info": false, 00:13:23.977 "zone_management": false, 00:13:23.977 "zone_append": false, 00:13:23.977 "compare": false, 00:13:23.977 "compare_and_write": false, 00:13:23.977 "abort": true, 00:13:23.977 "seek_hole": false, 00:13:23.977 "seek_data": false, 00:13:23.977 "copy": true, 00:13:23.977 "nvme_iov_md": false 00:13:23.977 }, 00:13:23.977 "memory_domains": [ 00:13:23.977 { 00:13:23.977 "dma_device_id": "system", 00:13:23.977 "dma_device_type": 1 00:13:23.977 }, 00:13:23.977 { 00:13:23.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.977 "dma_device_type": 2 00:13:23.977 } 00:13:23.977 ], 00:13:23.977 "driver_specific": {} 00:13:23.977 } 00:13:23.977 ] 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
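After each `bdev_malloc_create 32 512 -b <name>`, the `waitforbdev` helper polls `bdev_get_bdevs -b <name> -t 2000` until the bdev appears; once the raid module claims it, the descriptor reports `claimed: true` with an `exclusive_write` claim type. A sketch of those checks against a trimmed copy of the BaseBdev2 descriptor from the log (driving `jq` from a heredoc rather than a live target):

```shell
#!/usr/bin/env bash
# Trimmed copy of the "bdev_get_bdevs -b BaseBdev2 -t 2000" output above.
bdev_info=$(cat <<'EOF'
{
  "name": "BaseBdev2",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write"
}
EOF
)

claimed=$(jq -r '.claimed' <<<"$bdev_info")
claim_type=$(jq -r '.claim_type' <<<"$bdev_info")
# 65536 blocks * 512 B = the 32 MiB requested by "bdev_malloc_create 32 512".
size_mib=$(jq -r '.num_blocks * .block_size / 1048576' <<<"$bdev_info")

echo "claimed=$claimed type=$claim_type size=${size_mib}MiB"
```

The `exclusive_write` claim is what prevents a second raid (or any other module) from opening the same base bdev for writes while it is a member.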
00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.977 
19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.977 "name": "Existed_Raid", 00:13:23.977 "uuid": "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4", 00:13:23.977 "strip_size_kb": 0, 00:13:23.977 "state": "configuring", 00:13:23.977 "raid_level": "raid1", 00:13:23.977 "superblock": true, 00:13:23.977 "num_base_bdevs": 3, 00:13:23.977 "num_base_bdevs_discovered": 2, 00:13:23.977 "num_base_bdevs_operational": 3, 00:13:23.977 "base_bdevs_list": [ 00:13:23.977 { 00:13:23.977 "name": "BaseBdev1", 00:13:23.977 "uuid": "36f8f303-68dc-4429-aa0f-3907994aa771", 00:13:23.977 "is_configured": true, 00:13:23.977 "data_offset": 2048, 00:13:23.977 "data_size": 63488 00:13:23.977 }, 00:13:23.977 { 00:13:23.977 "name": "BaseBdev2", 00:13:23.977 "uuid": "9a7695d7-3bc4-466a-bb7f-5f40fa969b8d", 00:13:23.977 "is_configured": true, 00:13:23.977 "data_offset": 2048, 00:13:23.977 "data_size": 63488 00:13:23.977 }, 00:13:23.977 { 00:13:23.977 "name": "BaseBdev3", 00:13:23.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.977 "is_configured": false, 00:13:23.977 "data_offset": 0, 00:13:23.977 "data_size": 0 00:13:23.977 } 00:13:23.977 ] 00:13:23.977 }' 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.977 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.546 [2024-12-05 19:33:17.781522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.546 [2024-12-05 19:33:17.781902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:13:24.546 [2024-12-05 19:33:17.781933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.546 BaseBdev3 00:13:24.546 [2024-12-05 19:33:17.782283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:24.546 [2024-12-05 19:33:17.782501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:24.546 [2024-12-05 19:33:17.782518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:24.546 [2024-12-05 19:33:17.782706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.546 19:33:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.546 [ 00:13:24.546 { 00:13:24.546 "name": "BaseBdev3", 00:13:24.546 "aliases": [ 00:13:24.546 "8022a7fb-2335-4001-a1ac-4350bc103fcd" 00:13:24.546 ], 00:13:24.546 "product_name": "Malloc disk", 00:13:24.546 "block_size": 512, 00:13:24.546 "num_blocks": 65536, 00:13:24.546 "uuid": "8022a7fb-2335-4001-a1ac-4350bc103fcd", 00:13:24.546 "assigned_rate_limits": { 00:13:24.546 "rw_ios_per_sec": 0, 00:13:24.546 "rw_mbytes_per_sec": 0, 00:13:24.546 "r_mbytes_per_sec": 0, 00:13:24.546 "w_mbytes_per_sec": 0 00:13:24.546 }, 00:13:24.546 "claimed": true, 00:13:24.546 "claim_type": "exclusive_write", 00:13:24.546 "zoned": false, 00:13:24.546 "supported_io_types": { 00:13:24.546 "read": true, 00:13:24.546 "write": true, 00:13:24.546 "unmap": true, 00:13:24.546 "flush": true, 00:13:24.546 "reset": true, 00:13:24.546 "nvme_admin": false, 00:13:24.546 "nvme_io": false, 00:13:24.546 "nvme_io_md": false, 00:13:24.546 "write_zeroes": true, 00:13:24.546 "zcopy": true, 00:13:24.546 "get_zone_info": false, 00:13:24.546 "zone_management": false, 00:13:24.546 "zone_append": false, 00:13:24.546 "compare": false, 00:13:24.546 "compare_and_write": false, 00:13:24.546 "abort": true, 00:13:24.546 "seek_hole": false, 00:13:24.546 "seek_data": false, 00:13:24.546 "copy": true, 00:13:24.546 "nvme_iov_md": false 00:13:24.546 }, 00:13:24.546 "memory_domains": [ 00:13:24.546 { 00:13:24.546 "dma_device_id": "system", 00:13:24.546 "dma_device_type": 1 00:13:24.546 }, 00:13:24.546 { 00:13:24.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.546 "dma_device_type": 2 00:13:24.546 } 00:13:24.546 ], 00:13:24.546 "driver_specific": {} 00:13:24.546 } 00:13:24.546 ] 
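Once BaseBdev3 is claimed, all three members are discovered and the array transitions from `configuring` to `online` (`blockcnt 63488, blocklen 512` in the notice above). Note the member geometry in the log: each malloc bdev is 65536 blocks, but every configured member reports `data_offset: 2048` and `data_size: 63488`, i.e. the `-s` flag reserves the leading 2048 blocks of each base bdev for the superblock. A quick check of that arithmetic, combining the `num_blocks` from the malloc descriptors with the `data_offset`/`data_size` from the raid listing:

```shell
#!/usr/bin/env bash
# Geometry of one configured member, values copied from the log.
member='{"num_blocks": 65536, "block_size": 512, "data_offset": 2048, "data_size": 63488}'

reserved_blocks=$(jq -r '.num_blocks - .data_size' <<<"$member")
reserved_bytes=$(jq -r '(.num_blocks - .data_size) * .block_size' <<<"$member")
offset=$(jq -r '.data_offset' <<<"$member")

# The reservation matches data_offset: 2048 blocks = 1 MiB superblock area.
echo "reserved=${reserved_blocks} blocks (${reserved_bytes} B), data_offset=${offset}"
```

This is also why the RAID1 volume itself exposes `num_blocks: 63488` rather than 65536: the mirrored capacity is one member's `data_size`, not its raw size.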
00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.546 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.547 19:33:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.547 "name": "Existed_Raid", 00:13:24.547 "uuid": "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4", 00:13:24.547 "strip_size_kb": 0, 00:13:24.547 "state": "online", 00:13:24.547 "raid_level": "raid1", 00:13:24.547 "superblock": true, 00:13:24.547 "num_base_bdevs": 3, 00:13:24.547 "num_base_bdevs_discovered": 3, 00:13:24.547 "num_base_bdevs_operational": 3, 00:13:24.547 "base_bdevs_list": [ 00:13:24.547 { 00:13:24.547 "name": "BaseBdev1", 00:13:24.547 "uuid": "36f8f303-68dc-4429-aa0f-3907994aa771", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 }, 00:13:24.547 { 00:13:24.547 "name": "BaseBdev2", 00:13:24.547 "uuid": "9a7695d7-3bc4-466a-bb7f-5f40fa969b8d", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 }, 00:13:24.547 { 00:13:24.547 "name": "BaseBdev3", 00:13:24.547 "uuid": "8022a7fb-2335-4001-a1ac-4350bc103fcd", 00:13:24.547 "is_configured": true, 00:13:24.547 "data_offset": 2048, 00:13:24.547 "data_size": 63488 00:13:24.547 } 00:13:24.547 ] 00:13:24.547 }' 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.547 19:33:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.115 [2024-12-05 19:33:18.334247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.115 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.115 "name": "Existed_Raid", 00:13:25.115 "aliases": [ 00:13:25.115 "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4" 00:13:25.115 ], 00:13:25.115 "product_name": "Raid Volume", 00:13:25.115 "block_size": 512, 00:13:25.115 "num_blocks": 63488, 00:13:25.115 "uuid": "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4", 00:13:25.115 "assigned_rate_limits": { 00:13:25.115 "rw_ios_per_sec": 0, 00:13:25.115 "rw_mbytes_per_sec": 0, 00:13:25.115 "r_mbytes_per_sec": 0, 00:13:25.115 "w_mbytes_per_sec": 0 00:13:25.115 }, 00:13:25.115 "claimed": false, 00:13:25.115 "zoned": false, 00:13:25.115 "supported_io_types": { 00:13:25.115 "read": true, 00:13:25.115 "write": true, 00:13:25.115 "unmap": false, 00:13:25.115 "flush": false, 00:13:25.115 "reset": true, 00:13:25.115 "nvme_admin": false, 00:13:25.115 "nvme_io": false, 00:13:25.115 "nvme_io_md": false, 00:13:25.115 
"write_zeroes": true, 00:13:25.115 "zcopy": false, 00:13:25.115 "get_zone_info": false, 00:13:25.115 "zone_management": false, 00:13:25.115 "zone_append": false, 00:13:25.115 "compare": false, 00:13:25.115 "compare_and_write": false, 00:13:25.115 "abort": false, 00:13:25.115 "seek_hole": false, 00:13:25.115 "seek_data": false, 00:13:25.115 "copy": false, 00:13:25.115 "nvme_iov_md": false 00:13:25.115 }, 00:13:25.115 "memory_domains": [ 00:13:25.115 { 00:13:25.115 "dma_device_id": "system", 00:13:25.115 "dma_device_type": 1 00:13:25.115 }, 00:13:25.115 { 00:13:25.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.115 "dma_device_type": 2 00:13:25.115 }, 00:13:25.115 { 00:13:25.115 "dma_device_id": "system", 00:13:25.115 "dma_device_type": 1 00:13:25.115 }, 00:13:25.115 { 00:13:25.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.115 "dma_device_type": 2 00:13:25.115 }, 00:13:25.115 { 00:13:25.115 "dma_device_id": "system", 00:13:25.115 "dma_device_type": 1 00:13:25.115 }, 00:13:25.115 { 00:13:25.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.115 "dma_device_type": 2 00:13:25.115 } 00:13:25.115 ], 00:13:25.115 "driver_specific": { 00:13:25.115 "raid": { 00:13:25.115 "uuid": "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4", 00:13:25.115 "strip_size_kb": 0, 00:13:25.115 "state": "online", 00:13:25.115 "raid_level": "raid1", 00:13:25.115 "superblock": true, 00:13:25.115 "num_base_bdevs": 3, 00:13:25.115 "num_base_bdevs_discovered": 3, 00:13:25.115 "num_base_bdevs_operational": 3, 00:13:25.115 "base_bdevs_list": [ 00:13:25.115 { 00:13:25.115 "name": "BaseBdev1", 00:13:25.115 "uuid": "36f8f303-68dc-4429-aa0f-3907994aa771", 00:13:25.115 "is_configured": true, 00:13:25.115 "data_offset": 2048, 00:13:25.115 "data_size": 63488 00:13:25.115 }, 00:13:25.115 { 00:13:25.115 "name": "BaseBdev2", 00:13:25.115 "uuid": "9a7695d7-3bc4-466a-bb7f-5f40fa969b8d", 00:13:25.115 "is_configured": true, 00:13:25.115 "data_offset": 2048, 00:13:25.115 "data_size": 63488 00:13:25.115 }, 
00:13:25.115 { 00:13:25.115 "name": "BaseBdev3", 00:13:25.115 "uuid": "8022a7fb-2335-4001-a1ac-4350bc103fcd", 00:13:25.115 "is_configured": true, 00:13:25.115 "data_offset": 2048, 00:13:25.115 "data_size": 63488 00:13:25.115 } 00:13:25.115 ] 00:13:25.115 } 00:13:25.115 } 00:13:25.115 }' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:25.116 BaseBdev2 00:13:25.116 BaseBdev3' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.116 
19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.375 [2024-12-05 19:33:18.658000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.375 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.375 
19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.376 "name": "Existed_Raid", 00:13:25.376 "uuid": "5e7dfc35-37c6-4ba7-a2de-aea5e68442a4", 00:13:25.376 "strip_size_kb": 0, 00:13:25.376 "state": "online", 00:13:25.376 "raid_level": "raid1", 00:13:25.376 "superblock": true, 00:13:25.376 "num_base_bdevs": 3, 00:13:25.376 "num_base_bdevs_discovered": 2, 00:13:25.376 "num_base_bdevs_operational": 2, 00:13:25.376 "base_bdevs_list": [ 00:13:25.376 { 00:13:25.376 "name": null, 00:13:25.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.376 "is_configured": false, 00:13:25.376 "data_offset": 0, 00:13:25.376 "data_size": 63488 00:13:25.376 }, 00:13:25.376 { 00:13:25.376 "name": "BaseBdev2", 00:13:25.376 "uuid": "9a7695d7-3bc4-466a-bb7f-5f40fa969b8d", 00:13:25.376 "is_configured": true, 00:13:25.376 "data_offset": 2048, 00:13:25.376 "data_size": 63488 00:13:25.376 }, 00:13:25.376 { 00:13:25.376 "name": "BaseBdev3", 00:13:25.376 "uuid": "8022a7fb-2335-4001-a1ac-4350bc103fcd", 00:13:25.376 "is_configured": true, 00:13:25.376 "data_offset": 2048, 00:13:25.376 "data_size": 63488 00:13:25.376 } 00:13:25.376 ] 00:13:25.376 }' 00:13:25.376 19:33:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.376 
19:33:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.951 [2024-12-05 19:33:19.299572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.951 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 [2024-12-05 19:33:19.445865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.210 [2024-12-05 19:33:19.446002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.210 [2024-12-05 19:33:19.529427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.210 [2024-12-05 19:33:19.529516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.210 [2024-12-05 19:33:19.529537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 BaseBdev2 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.210 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.468 [ 00:13:26.468 { 00:13:26.468 "name": "BaseBdev2", 00:13:26.468 "aliases": [ 00:13:26.468 "d58a754a-8b1c-445c-b195-fd3f79dc373f" 00:13:26.468 ], 00:13:26.468 "product_name": "Malloc disk", 00:13:26.468 "block_size": 512, 00:13:26.468 "num_blocks": 65536, 00:13:26.468 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:26.468 "assigned_rate_limits": { 00:13:26.468 "rw_ios_per_sec": 0, 00:13:26.469 "rw_mbytes_per_sec": 0, 00:13:26.469 "r_mbytes_per_sec": 0, 00:13:26.469 "w_mbytes_per_sec": 0 00:13:26.469 }, 00:13:26.469 "claimed": false, 00:13:26.469 "zoned": false, 00:13:26.469 "supported_io_types": { 00:13:26.469 "read": true, 00:13:26.469 "write": true, 00:13:26.469 "unmap": true, 00:13:26.469 "flush": true, 00:13:26.469 "reset": true, 00:13:26.469 "nvme_admin": false, 00:13:26.469 "nvme_io": false, 00:13:26.469 
"nvme_io_md": false, 00:13:26.469 "write_zeroes": true, 00:13:26.469 "zcopy": true, 00:13:26.469 "get_zone_info": false, 00:13:26.469 "zone_management": false, 00:13:26.469 "zone_append": false, 00:13:26.469 "compare": false, 00:13:26.469 "compare_and_write": false, 00:13:26.469 "abort": true, 00:13:26.469 "seek_hole": false, 00:13:26.469 "seek_data": false, 00:13:26.469 "copy": true, 00:13:26.469 "nvme_iov_md": false 00:13:26.469 }, 00:13:26.469 "memory_domains": [ 00:13:26.469 { 00:13:26.469 "dma_device_id": "system", 00:13:26.469 "dma_device_type": 1 00:13:26.469 }, 00:13:26.469 { 00:13:26.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.469 "dma_device_type": 2 00:13:26.469 } 00:13:26.469 ], 00:13:26.469 "driver_specific": {} 00:13:26.469 } 00:13:26.469 ] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.469 BaseBdev3 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.469 [ 00:13:26.469 { 00:13:26.469 "name": "BaseBdev3", 00:13:26.469 "aliases": [ 00:13:26.469 "9b38dc57-dc92-4d78-b880-64becfab7180" 00:13:26.469 ], 00:13:26.469 "product_name": "Malloc disk", 00:13:26.469 "block_size": 512, 00:13:26.469 "num_blocks": 65536, 00:13:26.469 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:26.469 "assigned_rate_limits": { 00:13:26.469 "rw_ios_per_sec": 0, 00:13:26.469 "rw_mbytes_per_sec": 0, 00:13:26.469 "r_mbytes_per_sec": 0, 00:13:26.469 "w_mbytes_per_sec": 0 00:13:26.469 }, 00:13:26.469 "claimed": false, 00:13:26.469 "zoned": false, 00:13:26.469 "supported_io_types": { 00:13:26.469 "read": true, 00:13:26.469 "write": true, 00:13:26.469 "unmap": true, 00:13:26.469 "flush": true, 00:13:26.469 "reset": true, 00:13:26.469 "nvme_admin": false, 
00:13:26.469 "nvme_io": false, 00:13:26.469 "nvme_io_md": false, 00:13:26.469 "write_zeroes": true, 00:13:26.469 "zcopy": true, 00:13:26.469 "get_zone_info": false, 00:13:26.469 "zone_management": false, 00:13:26.469 "zone_append": false, 00:13:26.469 "compare": false, 00:13:26.469 "compare_and_write": false, 00:13:26.469 "abort": true, 00:13:26.469 "seek_hole": false, 00:13:26.469 "seek_data": false, 00:13:26.469 "copy": true, 00:13:26.469 "nvme_iov_md": false 00:13:26.469 }, 00:13:26.469 "memory_domains": [ 00:13:26.469 { 00:13:26.469 "dma_device_id": "system", 00:13:26.469 "dma_device_type": 1 00:13:26.469 }, 00:13:26.469 { 00:13:26.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.469 "dma_device_type": 2 00:13:26.469 } 00:13:26.469 ], 00:13:26.469 "driver_specific": {} 00:13:26.469 } 00:13:26.469 ] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.469 [2024-12-05 19:33:19.747742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.469 [2024-12-05 19:33:19.747937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.469 [2024-12-05 19:33:19.747980] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.469 [2024-12-05 19:33:19.750614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.469 
19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.469 "name": "Existed_Raid", 00:13:26.469 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:26.469 "strip_size_kb": 0, 00:13:26.469 "state": "configuring", 00:13:26.469 "raid_level": "raid1", 00:13:26.469 "superblock": true, 00:13:26.469 "num_base_bdevs": 3, 00:13:26.469 "num_base_bdevs_discovered": 2, 00:13:26.469 "num_base_bdevs_operational": 3, 00:13:26.469 "base_bdevs_list": [ 00:13:26.469 { 00:13:26.469 "name": "BaseBdev1", 00:13:26.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.469 "is_configured": false, 00:13:26.469 "data_offset": 0, 00:13:26.469 "data_size": 0 00:13:26.469 }, 00:13:26.469 { 00:13:26.469 "name": "BaseBdev2", 00:13:26.469 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:26.469 "is_configured": true, 00:13:26.469 "data_offset": 2048, 00:13:26.469 "data_size": 63488 00:13:26.469 }, 00:13:26.469 { 00:13:26.469 "name": "BaseBdev3", 00:13:26.469 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:26.469 "is_configured": true, 00:13:26.469 "data_offset": 2048, 00:13:26.469 "data_size": 63488 00:13:26.469 } 00:13:26.469 ] 00:13:26.469 }' 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.469 19:33:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.036 [2024-12-05 19:33:20.267892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.036 19:33:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.036 "name": 
"Existed_Raid", 00:13:27.036 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:27.036 "strip_size_kb": 0, 00:13:27.036 "state": "configuring", 00:13:27.036 "raid_level": "raid1", 00:13:27.036 "superblock": true, 00:13:27.036 "num_base_bdevs": 3, 00:13:27.036 "num_base_bdevs_discovered": 1, 00:13:27.036 "num_base_bdevs_operational": 3, 00:13:27.036 "base_bdevs_list": [ 00:13:27.036 { 00:13:27.036 "name": "BaseBdev1", 00:13:27.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.036 "is_configured": false, 00:13:27.036 "data_offset": 0, 00:13:27.036 "data_size": 0 00:13:27.036 }, 00:13:27.036 { 00:13:27.036 "name": null, 00:13:27.036 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:27.036 "is_configured": false, 00:13:27.036 "data_offset": 0, 00:13:27.036 "data_size": 63488 00:13:27.036 }, 00:13:27.036 { 00:13:27.036 "name": "BaseBdev3", 00:13:27.036 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:27.036 "is_configured": true, 00:13:27.036 "data_offset": 2048, 00:13:27.036 "data_size": 63488 00:13:27.036 } 00:13:27.036 ] 00:13:27.036 }' 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.036 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.602 
19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 [2024-12-05 19:33:20.859903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.602 BaseBdev1 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 [ 00:13:27.602 { 00:13:27.602 "name": "BaseBdev1", 00:13:27.602 "aliases": [ 00:13:27.602 "65089a42-dfc8-415d-bb35-6eda24b7a97d" 00:13:27.602 ], 00:13:27.602 "product_name": "Malloc disk", 00:13:27.602 "block_size": 512, 00:13:27.602 "num_blocks": 65536, 00:13:27.602 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:27.602 "assigned_rate_limits": { 00:13:27.602 "rw_ios_per_sec": 0, 00:13:27.602 "rw_mbytes_per_sec": 0, 00:13:27.602 "r_mbytes_per_sec": 0, 00:13:27.602 "w_mbytes_per_sec": 0 00:13:27.602 }, 00:13:27.602 "claimed": true, 00:13:27.602 "claim_type": "exclusive_write", 00:13:27.602 "zoned": false, 00:13:27.602 "supported_io_types": { 00:13:27.602 "read": true, 00:13:27.602 "write": true, 00:13:27.602 "unmap": true, 00:13:27.602 "flush": true, 00:13:27.602 "reset": true, 00:13:27.602 "nvme_admin": false, 00:13:27.602 "nvme_io": false, 00:13:27.602 "nvme_io_md": false, 00:13:27.602 "write_zeroes": true, 00:13:27.602 "zcopy": true, 00:13:27.602 "get_zone_info": false, 00:13:27.602 "zone_management": false, 00:13:27.602 "zone_append": false, 00:13:27.602 "compare": false, 00:13:27.602 "compare_and_write": false, 00:13:27.602 "abort": true, 00:13:27.602 "seek_hole": false, 00:13:27.602 "seek_data": false, 00:13:27.602 "copy": true, 00:13:27.602 "nvme_iov_md": false 00:13:27.602 }, 00:13:27.602 "memory_domains": [ 00:13:27.602 { 00:13:27.602 "dma_device_id": "system", 00:13:27.602 "dma_device_type": 1 00:13:27.602 }, 00:13:27.602 { 00:13:27.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.602 "dma_device_type": 2 00:13:27.602 } 00:13:27.602 ], 00:13:27.602 "driver_specific": {} 00:13:27.602 } 00:13:27.602 ] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.602 
19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.602 "name": "Existed_Raid", 00:13:27.602 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:27.602 "strip_size_kb": 0, 
00:13:27.602 "state": "configuring", 00:13:27.602 "raid_level": "raid1", 00:13:27.602 "superblock": true, 00:13:27.602 "num_base_bdevs": 3, 00:13:27.602 "num_base_bdevs_discovered": 2, 00:13:27.602 "num_base_bdevs_operational": 3, 00:13:27.602 "base_bdevs_list": [ 00:13:27.602 { 00:13:27.602 "name": "BaseBdev1", 00:13:27.602 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:27.602 "is_configured": true, 00:13:27.602 "data_offset": 2048, 00:13:27.602 "data_size": 63488 00:13:27.602 }, 00:13:27.602 { 00:13:27.602 "name": null, 00:13:27.602 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:27.602 "is_configured": false, 00:13:27.602 "data_offset": 0, 00:13:27.602 "data_size": 63488 00:13:27.602 }, 00:13:27.602 { 00:13:27.602 "name": "BaseBdev3", 00:13:27.602 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:27.602 "is_configured": true, 00:13:27.602 "data_offset": 2048, 00:13:27.602 "data_size": 63488 00:13:27.602 } 00:13:27.602 ] 00:13:27.602 }' 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.602 19:33:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 [2024-12-05 19:33:21.464196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 19:33:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.170 "name": "Existed_Raid", 00:13:28.170 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:28.170 "strip_size_kb": 0, 00:13:28.170 "state": "configuring", 00:13:28.170 "raid_level": "raid1", 00:13:28.170 "superblock": true, 00:13:28.170 "num_base_bdevs": 3, 00:13:28.170 "num_base_bdevs_discovered": 1, 00:13:28.170 "num_base_bdevs_operational": 3, 00:13:28.170 "base_bdevs_list": [ 00:13:28.170 { 00:13:28.170 "name": "BaseBdev1", 00:13:28.170 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:28.170 "is_configured": true, 00:13:28.170 "data_offset": 2048, 00:13:28.170 "data_size": 63488 00:13:28.170 }, 00:13:28.170 { 00:13:28.170 "name": null, 00:13:28.170 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:28.170 "is_configured": false, 00:13:28.170 "data_offset": 0, 00:13:28.170 "data_size": 63488 00:13:28.170 }, 00:13:28.170 { 00:13:28.170 "name": null, 00:13:28.170 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:28.170 "is_configured": false, 00:13:28.170 "data_offset": 0, 00:13:28.170 "data_size": 63488 00:13:28.170 } 00:13:28.170 ] 00:13:28.170 }' 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.170 19:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.738 [2024-12-05 19:33:22.060431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.738 "name": "Existed_Raid", 00:13:28.738 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:28.738 "strip_size_kb": 0, 00:13:28.738 "state": "configuring", 00:13:28.738 "raid_level": "raid1", 00:13:28.738 "superblock": true, 00:13:28.738 "num_base_bdevs": 3, 00:13:28.738 "num_base_bdevs_discovered": 2, 00:13:28.738 "num_base_bdevs_operational": 3, 00:13:28.738 "base_bdevs_list": [ 00:13:28.738 { 00:13:28.738 "name": "BaseBdev1", 00:13:28.738 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:28.738 "is_configured": true, 00:13:28.738 "data_offset": 2048, 00:13:28.738 "data_size": 63488 00:13:28.738 }, 00:13:28.738 { 00:13:28.738 "name": null, 00:13:28.738 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:28.738 "is_configured": false, 00:13:28.738 "data_offset": 0, 00:13:28.738 "data_size": 63488 00:13:28.738 }, 00:13:28.738 { 00:13:28.738 "name": "BaseBdev3", 00:13:28.738 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:28.738 "is_configured": true, 00:13:28.738 "data_offset": 2048, 00:13:28.738 "data_size": 63488 00:13:28.738 } 00:13:28.738 ] 00:13:28.738 }' 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.738 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.304 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.304 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.304 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 [2024-12-05 19:33:22.628616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.562 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.562 "name": "Existed_Raid", 00:13:29.562 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:29.562 "strip_size_kb": 0, 00:13:29.562 "state": "configuring", 00:13:29.562 "raid_level": "raid1", 00:13:29.562 "superblock": true, 00:13:29.562 "num_base_bdevs": 3, 00:13:29.562 "num_base_bdevs_discovered": 1, 00:13:29.562 "num_base_bdevs_operational": 3, 00:13:29.562 "base_bdevs_list": [ 00:13:29.562 { 00:13:29.562 "name": null, 00:13:29.563 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:29.563 "is_configured": false, 00:13:29.563 "data_offset": 0, 00:13:29.563 "data_size": 63488 00:13:29.563 }, 00:13:29.563 { 00:13:29.563 "name": null, 00:13:29.563 "uuid": 
"d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:29.563 "is_configured": false, 00:13:29.563 "data_offset": 0, 00:13:29.563 "data_size": 63488 00:13:29.563 }, 00:13:29.563 { 00:13:29.563 "name": "BaseBdev3", 00:13:29.563 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:29.563 "is_configured": true, 00:13:29.563 "data_offset": 2048, 00:13:29.563 "data_size": 63488 00:13:29.563 } 00:13:29.563 ] 00:13:29.563 }' 00:13:29.563 19:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.563 19:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.821 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.821 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.821 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.079 [2024-12-05 19:33:23.288665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.079 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.080 "name": "Existed_Raid", 00:13:30.080 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:30.080 "strip_size_kb": 0, 00:13:30.080 "state": "configuring", 00:13:30.080 
"raid_level": "raid1", 00:13:30.080 "superblock": true, 00:13:30.080 "num_base_bdevs": 3, 00:13:30.080 "num_base_bdevs_discovered": 2, 00:13:30.080 "num_base_bdevs_operational": 3, 00:13:30.080 "base_bdevs_list": [ 00:13:30.080 { 00:13:30.080 "name": null, 00:13:30.080 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:30.080 "is_configured": false, 00:13:30.080 "data_offset": 0, 00:13:30.080 "data_size": 63488 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "BaseBdev2", 00:13:30.080 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:30.080 "is_configured": true, 00:13:30.080 "data_offset": 2048, 00:13:30.080 "data_size": 63488 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "BaseBdev3", 00:13:30.080 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:30.080 "is_configured": true, 00:13:30.080 "data_offset": 2048, 00:13:30.080 "data_size": 63488 00:13:30.080 } 00:13:30.080 ] 00:13:30.080 }' 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.080 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.647 19:33:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65089a42-dfc8-415d-bb35-6eda24b7a97d 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.647 [2024-12-05 19:33:23.944837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:30.647 [2024-12-05 19:33:23.945126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:30.647 [2024-12-05 19:33:23.945145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.647 NewBaseBdev 00:13:30.647 [2024-12-05 19:33:23.945481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:30.647 [2024-12-05 19:33:23.945670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:30.647 [2024-12-05 19:33:23.945692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:30.647 [2024-12-05 19:33:23.945878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:30.647 
19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.647 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.647 [ 00:13:30.647 { 00:13:30.647 "name": "NewBaseBdev", 00:13:30.647 "aliases": [ 00:13:30.647 "65089a42-dfc8-415d-bb35-6eda24b7a97d" 00:13:30.647 ], 00:13:30.648 "product_name": "Malloc disk", 00:13:30.648 "block_size": 512, 00:13:30.648 "num_blocks": 65536, 00:13:30.648 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:30.648 "assigned_rate_limits": { 00:13:30.648 "rw_ios_per_sec": 0, 00:13:30.648 "rw_mbytes_per_sec": 0, 00:13:30.648 "r_mbytes_per_sec": 0, 00:13:30.648 "w_mbytes_per_sec": 0 00:13:30.648 }, 00:13:30.648 "claimed": true, 00:13:30.648 "claim_type": "exclusive_write", 00:13:30.648 
"zoned": false, 00:13:30.648 "supported_io_types": { 00:13:30.648 "read": true, 00:13:30.648 "write": true, 00:13:30.648 "unmap": true, 00:13:30.648 "flush": true, 00:13:30.648 "reset": true, 00:13:30.648 "nvme_admin": false, 00:13:30.648 "nvme_io": false, 00:13:30.648 "nvme_io_md": false, 00:13:30.648 "write_zeroes": true, 00:13:30.648 "zcopy": true, 00:13:30.648 "get_zone_info": false, 00:13:30.648 "zone_management": false, 00:13:30.648 "zone_append": false, 00:13:30.648 "compare": false, 00:13:30.648 "compare_and_write": false, 00:13:30.648 "abort": true, 00:13:30.648 "seek_hole": false, 00:13:30.648 "seek_data": false, 00:13:30.648 "copy": true, 00:13:30.648 "nvme_iov_md": false 00:13:30.648 }, 00:13:30.648 "memory_domains": [ 00:13:30.648 { 00:13:30.648 "dma_device_id": "system", 00:13:30.648 "dma_device_type": 1 00:13:30.648 }, 00:13:30.648 { 00:13:30.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.648 "dma_device_type": 2 00:13:30.648 } 00:13:30.648 ], 00:13:30.648 "driver_specific": {} 00:13:30.648 } 00:13:30.648 ] 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.648 19:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.648 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.648 "name": "Existed_Raid", 00:13:30.648 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:30.648 "strip_size_kb": 0, 00:13:30.648 "state": "online", 00:13:30.648 "raid_level": "raid1", 00:13:30.648 "superblock": true, 00:13:30.648 "num_base_bdevs": 3, 00:13:30.648 "num_base_bdevs_discovered": 3, 00:13:30.648 "num_base_bdevs_operational": 3, 00:13:30.648 "base_bdevs_list": [ 00:13:30.648 { 00:13:30.648 "name": "NewBaseBdev", 00:13:30.648 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:30.648 "is_configured": true, 00:13:30.648 "data_offset": 2048, 00:13:30.648 "data_size": 63488 00:13:30.648 }, 00:13:30.648 { 00:13:30.648 "name": "BaseBdev2", 00:13:30.648 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:30.648 "is_configured": true, 00:13:30.648 "data_offset": 2048, 00:13:30.648 "data_size": 63488 00:13:30.648 }, 00:13:30.648 
{ 00:13:30.648 "name": "BaseBdev3", 00:13:30.648 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:30.648 "is_configured": true, 00:13:30.648 "data_offset": 2048, 00:13:30.648 "data_size": 63488 00:13:30.648 } 00:13:30.648 ] 00:13:30.648 }' 00:13:30.648 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.648 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.214 [2024-12-05 19:33:24.493422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.214 "name": "Existed_Raid", 00:13:31.214 
"aliases": [ 00:13:31.214 "9c3ba9e4-5b22-482c-a561-bdd3b6c95178" 00:13:31.214 ], 00:13:31.214 "product_name": "Raid Volume", 00:13:31.214 "block_size": 512, 00:13:31.214 "num_blocks": 63488, 00:13:31.214 "uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:31.214 "assigned_rate_limits": { 00:13:31.214 "rw_ios_per_sec": 0, 00:13:31.214 "rw_mbytes_per_sec": 0, 00:13:31.214 "r_mbytes_per_sec": 0, 00:13:31.214 "w_mbytes_per_sec": 0 00:13:31.214 }, 00:13:31.214 "claimed": false, 00:13:31.214 "zoned": false, 00:13:31.214 "supported_io_types": { 00:13:31.214 "read": true, 00:13:31.214 "write": true, 00:13:31.214 "unmap": false, 00:13:31.214 "flush": false, 00:13:31.214 "reset": true, 00:13:31.214 "nvme_admin": false, 00:13:31.214 "nvme_io": false, 00:13:31.214 "nvme_io_md": false, 00:13:31.214 "write_zeroes": true, 00:13:31.214 "zcopy": false, 00:13:31.214 "get_zone_info": false, 00:13:31.214 "zone_management": false, 00:13:31.214 "zone_append": false, 00:13:31.214 "compare": false, 00:13:31.214 "compare_and_write": false, 00:13:31.214 "abort": false, 00:13:31.214 "seek_hole": false, 00:13:31.214 "seek_data": false, 00:13:31.214 "copy": false, 00:13:31.214 "nvme_iov_md": false 00:13:31.214 }, 00:13:31.214 "memory_domains": [ 00:13:31.214 { 00:13:31.214 "dma_device_id": "system", 00:13:31.214 "dma_device_type": 1 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.214 "dma_device_type": 2 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "dma_device_id": "system", 00:13:31.214 "dma_device_type": 1 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.214 "dma_device_type": 2 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "dma_device_id": "system", 00:13:31.214 "dma_device_type": 1 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.214 "dma_device_type": 2 00:13:31.214 } 00:13:31.214 ], 00:13:31.214 "driver_specific": { 00:13:31.214 "raid": { 00:13:31.214 
"uuid": "9c3ba9e4-5b22-482c-a561-bdd3b6c95178", 00:13:31.214 "strip_size_kb": 0, 00:13:31.214 "state": "online", 00:13:31.214 "raid_level": "raid1", 00:13:31.214 "superblock": true, 00:13:31.214 "num_base_bdevs": 3, 00:13:31.214 "num_base_bdevs_discovered": 3, 00:13:31.214 "num_base_bdevs_operational": 3, 00:13:31.214 "base_bdevs_list": [ 00:13:31.214 { 00:13:31.214 "name": "NewBaseBdev", 00:13:31.214 "uuid": "65089a42-dfc8-415d-bb35-6eda24b7a97d", 00:13:31.214 "is_configured": true, 00:13:31.214 "data_offset": 2048, 00:13:31.214 "data_size": 63488 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "name": "BaseBdev2", 00:13:31.214 "uuid": "d58a754a-8b1c-445c-b195-fd3f79dc373f", 00:13:31.214 "is_configured": true, 00:13:31.214 "data_offset": 2048, 00:13:31.214 "data_size": 63488 00:13:31.214 }, 00:13:31.214 { 00:13:31.214 "name": "BaseBdev3", 00:13:31.214 "uuid": "9b38dc57-dc92-4d78-b880-64becfab7180", 00:13:31.214 "is_configured": true, 00:13:31.214 "data_offset": 2048, 00:13:31.214 "data_size": 63488 00:13:31.214 } 00:13:31.214 ] 00:13:31.214 } 00:13:31.214 } 00:13:31.214 }' 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:31.214 BaseBdev2 00:13:31.214 BaseBdev3' 00:13:31.214 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:31.472 19:33:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.472 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.473 [2024-12-05 19:33:24.821135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.473 [2024-12-05 19:33:24.821176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.473 [2024-12-05 19:33:24.821269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.473 [2024-12-05 19:33:24.821656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.473 [2024-12-05 19:33:24.821674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68069 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68069 ']' 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68069 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68069 00:13:31.473 killing process with pid 68069 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68069' 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68069 00:13:31.473 [2024-12-05 19:33:24.858753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.473 19:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68069 00:13:31.730 [2024-12-05 19:33:25.129896] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.104 19:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:33.104 00:13:33.104 real 0m11.705s 00:13:33.104 user 0m19.348s 00:13:33.104 sys 0m1.652s 00:13:33.104 19:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.104 ************************************ 00:13:33.104 END TEST raid_state_function_test_sb 00:13:33.104 ************************************ 00:13:33.104 19:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.104 19:33:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:13:33.104 19:33:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:33.104 19:33:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.104 19:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.104 ************************************ 00:13:33.104 START TEST raid_superblock_test 00:13:33.104 ************************************ 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68706 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68706 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68706 ']' 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.104 19:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.104 [2024-12-05 19:33:26.348114] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:13:33.104 [2024-12-05 19:33:26.348792] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68706 ] 00:13:33.104 [2024-12-05 19:33:26.533282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.362 [2024-12-05 19:33:26.667281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.621 [2024-12-05 19:33:26.881294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.621 [2024-12-05 19:33:26.881378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:33.880 
19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.880 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.139 malloc1 00:13:34.139 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.139 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.139 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.139 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.139 [2024-12-05 19:33:27.366076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.139 [2024-12-05 19:33:27.366189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.140 [2024-12-05 19:33:27.366218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.140 [2024-12-05 19:33:27.366233] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.140 [2024-12-05 19:33:27.369327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.140 [2024-12-05 19:33:27.369374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.140 pt1 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.140 malloc2 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.140 [2024-12-05 19:33:27.424070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.140 [2024-12-05 19:33:27.424313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.140 [2024-12-05 19:33:27.424363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.140 [2024-12-05 19:33:27.424379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.140 [2024-12-05 19:33:27.427171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.140 [2024-12-05 19:33:27.427218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.140 
pt2 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.140 malloc3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.140 [2024-12-05 19:33:27.488610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:34.140 [2024-12-05 19:33:27.488879] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.140 [2024-12-05 19:33:27.488925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:34.140 [2024-12-05 19:33:27.488942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.140 [2024-12-05 19:33:27.491910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.140 [2024-12-05 19:33:27.492106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:34.140 pt3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.140 [2024-12-05 19:33:27.500770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.140 [2024-12-05 19:33:27.503551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.140 [2024-12-05 19:33:27.503828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:34.140 [2024-12-05 19:33:27.504182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.140 [2024-12-05 19:33:27.504330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.140 [2024-12-05 19:33:27.504687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:34.140 
[2024-12-05 19:33:27.505067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.140 [2024-12-05 19:33:27.505204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.140 [2024-12-05 19:33:27.505570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.140 "name": "raid_bdev1", 00:13:34.140 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:34.140 "strip_size_kb": 0, 00:13:34.140 "state": "online", 00:13:34.140 "raid_level": "raid1", 00:13:34.140 "superblock": true, 00:13:34.140 "num_base_bdevs": 3, 00:13:34.140 "num_base_bdevs_discovered": 3, 00:13:34.140 "num_base_bdevs_operational": 3, 00:13:34.140 "base_bdevs_list": [ 00:13:34.140 { 00:13:34.140 "name": "pt1", 00:13:34.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.140 "is_configured": true, 00:13:34.140 "data_offset": 2048, 00:13:34.140 "data_size": 63488 00:13:34.140 }, 00:13:34.140 { 00:13:34.140 "name": "pt2", 00:13:34.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.140 "is_configured": true, 00:13:34.140 "data_offset": 2048, 00:13:34.140 "data_size": 63488 00:13:34.140 }, 00:13:34.140 { 00:13:34.140 "name": "pt3", 00:13:34.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.140 "is_configured": true, 00:13:34.140 "data_offset": 2048, 00:13:34.140 "data_size": 63488 00:13:34.140 } 00:13:34.140 ] 00:13:34.140 }' 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.140 19:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:34.708 19:33:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:34.708 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.709 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.709 [2024-12-05 19:33:28.034097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.709 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.709 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:34.709 "name": "raid_bdev1", 00:13:34.709 "aliases": [ 00:13:34.709 "8e31d3ce-f982-4c74-8d8e-7c557714937e" 00:13:34.709 ], 00:13:34.709 "product_name": "Raid Volume", 00:13:34.709 "block_size": 512, 00:13:34.709 "num_blocks": 63488, 00:13:34.709 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:34.709 "assigned_rate_limits": { 00:13:34.709 "rw_ios_per_sec": 0, 00:13:34.709 "rw_mbytes_per_sec": 0, 00:13:34.709 "r_mbytes_per_sec": 0, 00:13:34.709 "w_mbytes_per_sec": 0 00:13:34.709 }, 00:13:34.709 "claimed": false, 00:13:34.709 "zoned": false, 00:13:34.709 "supported_io_types": { 00:13:34.709 "read": true, 00:13:34.709 "write": true, 00:13:34.709 "unmap": false, 00:13:34.709 "flush": false, 00:13:34.709 "reset": true, 00:13:34.709 "nvme_admin": false, 00:13:34.709 "nvme_io": false, 00:13:34.709 "nvme_io_md": false, 00:13:34.709 "write_zeroes": true, 00:13:34.709 "zcopy": false, 00:13:34.709 "get_zone_info": false, 00:13:34.709 "zone_management": false, 00:13:34.709 "zone_append": false, 00:13:34.709 "compare": false, 00:13:34.709 
"compare_and_write": false, 00:13:34.709 "abort": false, 00:13:34.709 "seek_hole": false, 00:13:34.709 "seek_data": false, 00:13:34.709 "copy": false, 00:13:34.709 "nvme_iov_md": false 00:13:34.709 }, 00:13:34.709 "memory_domains": [ 00:13:34.709 { 00:13:34.709 "dma_device_id": "system", 00:13:34.709 "dma_device_type": 1 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.709 "dma_device_type": 2 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "dma_device_id": "system", 00:13:34.709 "dma_device_type": 1 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.709 "dma_device_type": 2 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "dma_device_id": "system", 00:13:34.709 "dma_device_type": 1 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.709 "dma_device_type": 2 00:13:34.709 } 00:13:34.709 ], 00:13:34.709 "driver_specific": { 00:13:34.709 "raid": { 00:13:34.709 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:34.709 "strip_size_kb": 0, 00:13:34.709 "state": "online", 00:13:34.709 "raid_level": "raid1", 00:13:34.709 "superblock": true, 00:13:34.709 "num_base_bdevs": 3, 00:13:34.709 "num_base_bdevs_discovered": 3, 00:13:34.709 "num_base_bdevs_operational": 3, 00:13:34.709 "base_bdevs_list": [ 00:13:34.709 { 00:13:34.709 "name": "pt1", 00:13:34.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.709 "is_configured": true, 00:13:34.709 "data_offset": 2048, 00:13:34.709 "data_size": 63488 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "name": "pt2", 00:13:34.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.709 "is_configured": true, 00:13:34.709 "data_offset": 2048, 00:13:34.709 "data_size": 63488 00:13:34.709 }, 00:13:34.709 { 00:13:34.709 "name": "pt3", 00:13:34.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.709 "is_configured": true, 00:13:34.709 "data_offset": 2048, 00:13:34.709 "data_size": 63488 00:13:34.709 } 
00:13:34.709 ] 00:13:34.709 } 00:13:34.709 } 00:13:34.709 }' 00:13:34.709 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.709 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:34.709 pt2 00:13:34.709 pt3' 00:13:34.709 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.967 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.968 [2024-12-05 19:33:28.354082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8e31d3ce-f982-4c74-8d8e-7c557714937e 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8e31d3ce-f982-4c74-8d8e-7c557714937e ']' 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.968 [2024-12-05 19:33:28.401775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.968 [2024-12-05 19:33:28.401922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.968 [2024-12-05 19:33:28.402118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.968 [2024-12-05 19:33:28.402337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.968 [2024-12-05 19:33:28.402478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.968 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.226 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 [2024-12-05 19:33:28.545913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:35.227 [2024-12-05 19:33:28.548505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:35.227 [2024-12-05 19:33:28.548596] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:35.227 [2024-12-05 19:33:28.548701] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:35.227 [2024-12-05 19:33:28.548934] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:35.227 [2024-12-05 19:33:28.549151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:35.227 [2024-12-05 19:33:28.549342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.227 [2024-12-05 19:33:28.549489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:35.227 request: 00:13:35.227 { 00:13:35.227 "name": "raid_bdev1", 00:13:35.227 "raid_level": "raid1", 00:13:35.227 "base_bdevs": [ 00:13:35.227 "malloc1", 00:13:35.227 "malloc2", 00:13:35.227 "malloc3" 00:13:35.227 ], 00:13:35.227 "superblock": false, 00:13:35.227 "method": "bdev_raid_create", 00:13:35.227 "req_id": 1 00:13:35.227 } 00:13:35.227 Got JSON-RPC error response 00:13:35.227 response: 00:13:35.227 { 00:13:35.227 "code": -17, 00:13:35.227 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:35.227 } 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 [2024-12-05 19:33:28.613884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:35.227 [2024-12-05 19:33:28.613947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.227 [2024-12-05 19:33:28.613977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:35.227 [2024-12-05 19:33:28.613992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.227 [2024-12-05 19:33:28.617051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.227 [2024-12-05 19:33:28.617144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:35.227 [2024-12-05 19:33:28.617261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:35.227 [2024-12-05 19:33:28.617334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:35.227 pt1 00:13:35.227 
19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.485 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.485 "name": "raid_bdev1", 00:13:35.485 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:35.485 "strip_size_kb": 0, 00:13:35.485 
"state": "configuring", 00:13:35.485 "raid_level": "raid1", 00:13:35.485 "superblock": true, 00:13:35.485 "num_base_bdevs": 3, 00:13:35.485 "num_base_bdevs_discovered": 1, 00:13:35.485 "num_base_bdevs_operational": 3, 00:13:35.485 "base_bdevs_list": [ 00:13:35.485 { 00:13:35.485 "name": "pt1", 00:13:35.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.485 "is_configured": true, 00:13:35.485 "data_offset": 2048, 00:13:35.485 "data_size": 63488 00:13:35.485 }, 00:13:35.485 { 00:13:35.485 "name": null, 00:13:35.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.485 "is_configured": false, 00:13:35.485 "data_offset": 2048, 00:13:35.485 "data_size": 63488 00:13:35.485 }, 00:13:35.485 { 00:13:35.485 "name": null, 00:13:35.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.485 "is_configured": false, 00:13:35.485 "data_offset": 2048, 00:13:35.485 "data_size": 63488 00:13:35.485 } 00:13:35.485 ] 00:13:35.485 }' 00:13:35.485 19:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.485 19:33:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.744 [2024-12-05 19:33:29.126141] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.744 [2024-12-05 19:33:29.126239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.744 [2024-12-05 19:33:29.126274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:35.744 
[2024-12-05 19:33:29.126289] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.744 [2024-12-05 19:33:29.126929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.744 [2024-12-05 19:33:29.126967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.744 [2024-12-05 19:33:29.127081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:35.744 [2024-12-05 19:33:29.127120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.744 pt2 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.744 [2024-12-05 19:33:29.134113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.744 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.003 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.003 "name": "raid_bdev1", 00:13:36.003 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:36.003 "strip_size_kb": 0, 00:13:36.003 "state": "configuring", 00:13:36.003 "raid_level": "raid1", 00:13:36.003 "superblock": true, 00:13:36.003 "num_base_bdevs": 3, 00:13:36.003 "num_base_bdevs_discovered": 1, 00:13:36.003 "num_base_bdevs_operational": 3, 00:13:36.003 "base_bdevs_list": [ 00:13:36.003 { 00:13:36.003 "name": "pt1", 00:13:36.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.003 "is_configured": true, 00:13:36.003 "data_offset": 2048, 00:13:36.003 "data_size": 63488 00:13:36.003 }, 00:13:36.003 { 00:13:36.003 "name": null, 00:13:36.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.003 "is_configured": false, 00:13:36.003 "data_offset": 0, 00:13:36.003 "data_size": 63488 00:13:36.003 }, 00:13:36.003 { 00:13:36.003 "name": null, 00:13:36.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.003 "is_configured": false, 00:13:36.003 
"data_offset": 2048, 00:13:36.003 "data_size": 63488 00:13:36.003 } 00:13:36.003 ] 00:13:36.003 }' 00:13:36.003 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.003 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.261 [2024-12-05 19:33:29.654227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.261 [2024-12-05 19:33:29.654335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.261 [2024-12-05 19:33:29.654368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:36.261 [2024-12-05 19:33:29.654385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.261 [2024-12-05 19:33:29.655036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.261 [2024-12-05 19:33:29.655104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.261 [2024-12-05 19:33:29.655216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.261 [2024-12-05 19:33:29.655263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.261 pt2 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.261 19:33:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:36.261 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.262 [2024-12-05 19:33:29.662174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:36.262 [2024-12-05 19:33:29.662233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.262 [2024-12-05 19:33:29.662259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:36.262 [2024-12-05 19:33:29.662275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.262 [2024-12-05 19:33:29.662757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.262 [2024-12-05 19:33:29.662799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:36.262 [2024-12-05 19:33:29.662875] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:36.262 [2024-12-05 19:33:29.662908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:36.262 [2024-12-05 19:33:29.663064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:36.262 [2024-12-05 19:33:29.663088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:36.262 [2024-12-05 19:33:29.663393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:36.262 [2024-12-05 19:33:29.663602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:13:36.262 [2024-12-05 19:33:29.663618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:36.262 [2024-12-05 19:33:29.663827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.262 pt3 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.262 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.520 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.520 "name": "raid_bdev1", 00:13:36.520 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:36.520 "strip_size_kb": 0, 00:13:36.520 "state": "online", 00:13:36.520 "raid_level": "raid1", 00:13:36.520 "superblock": true, 00:13:36.520 "num_base_bdevs": 3, 00:13:36.520 "num_base_bdevs_discovered": 3, 00:13:36.520 "num_base_bdevs_operational": 3, 00:13:36.520 "base_bdevs_list": [ 00:13:36.520 { 00:13:36.520 "name": "pt1", 00:13:36.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.520 "is_configured": true, 00:13:36.520 "data_offset": 2048, 00:13:36.520 "data_size": 63488 00:13:36.520 }, 00:13:36.520 { 00:13:36.520 "name": "pt2", 00:13:36.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.520 "is_configured": true, 00:13:36.520 "data_offset": 2048, 00:13:36.520 "data_size": 63488 00:13:36.520 }, 00:13:36.520 { 00:13:36.520 "name": "pt3", 00:13:36.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.520 "is_configured": true, 00:13:36.520 "data_offset": 2048, 00:13:36.520 "data_size": 63488 00:13:36.520 } 00:13:36.520 ] 00:13:36.520 }' 00:13:36.520 19:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.520 19:33:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:36.779 [2024-12-05 19:33:30.166837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:36.779 "name": "raid_bdev1",
00:13:36.779 "aliases": [
00:13:36.779 "8e31d3ce-f982-4c74-8d8e-7c557714937e"
00:13:36.779 ],
00:13:36.779 "product_name": "Raid Volume",
00:13:36.779 "block_size": 512,
00:13:36.779 "num_blocks": 63488,
00:13:36.779 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e",
00:13:36.779 "assigned_rate_limits": {
00:13:36.779 "rw_ios_per_sec": 0,
00:13:36.779 "rw_mbytes_per_sec": 0,
00:13:36.779 "r_mbytes_per_sec": 0,
00:13:36.779 "w_mbytes_per_sec": 0
00:13:36.779 },
00:13:36.779 "claimed": false,
00:13:36.779 "zoned": false,
00:13:36.779 "supported_io_types": {
00:13:36.779 "read": true,
00:13:36.779 "write": true,
00:13:36.779 "unmap": false,
00:13:36.779 "flush": false,
00:13:36.779 "reset": true,
00:13:36.779 "nvme_admin": false,
00:13:36.779 "nvme_io": false,
00:13:36.779 "nvme_io_md": false,
00:13:36.779 "write_zeroes": true,
00:13:36.779 "zcopy": false,
00:13:36.779 "get_zone_info": false,
00:13:36.779 "zone_management": false,
00:13:36.779 "zone_append": false,
00:13:36.779 "compare": false,
00:13:36.779 "compare_and_write": false,
00:13:36.779 "abort": false,
00:13:36.779 "seek_hole": false,
00:13:36.779 "seek_data": false,
00:13:36.779 "copy": false,
00:13:36.779 "nvme_iov_md": false
00:13:36.779 },
00:13:36.779 "memory_domains": [
00:13:36.779 {
00:13:36.779 "dma_device_id": "system",
00:13:36.779 "dma_device_type": 1
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:36.779 "dma_device_type": 2
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "dma_device_id": "system",
00:13:36.779 "dma_device_type": 1
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:36.779 "dma_device_type": 2
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "dma_device_id": "system",
00:13:36.779 "dma_device_type": 1
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:36.779 "dma_device_type": 2
00:13:36.779 }
00:13:36.779 ],
00:13:36.779 "driver_specific": {
00:13:36.779 "raid": {
00:13:36.779 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e",
00:13:36.779 "strip_size_kb": 0,
00:13:36.779 "state": "online",
00:13:36.779 "raid_level": "raid1",
00:13:36.779 "superblock": true,
00:13:36.779 "num_base_bdevs": 3,
00:13:36.779 "num_base_bdevs_discovered": 3,
00:13:36.779 "num_base_bdevs_operational": 3,
00:13:36.779 "base_bdevs_list": [
00:13:36.779 {
00:13:36.779 "name": "pt1",
00:13:36.779 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:36.779 "is_configured": true,
00:13:36.779 "data_offset": 2048,
00:13:36.779 "data_size": 63488
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "name": "pt2",
00:13:36.779 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:36.779 "is_configured": true,
00:13:36.779 "data_offset": 2048,
00:13:36.779 "data_size": 63488
00:13:36.779 },
00:13:36.779 {
00:13:36.779 "name": "pt3",
00:13:36.779 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:36.779 "is_configured": true,
00:13:36.779 "data_offset": 2048,
00:13:36.779 "data_size": 63488
00:13:36.779 }
00:13:36.779 ]
00:13:36.779 }
00:13:36.779 }
00:13:36.779 }'
00:13:36.779 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:37.073 pt2
00:13:37.073 pt3'
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.073 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.073 [2024-12-05 19:33:30.494872] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8e31d3ce-f982-4c74-8d8e-7c557714937e '!=' 8e31d3ce-f982-4c74-8d8e-7c557714937e ']' 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.332 [2024-12-05 19:33:30.542581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.332 19:33:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.332 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.332 "name": "raid_bdev1", 00:13:37.332 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:37.332 "strip_size_kb": 0, 00:13:37.332 "state": "online", 00:13:37.332 "raid_level": "raid1", 00:13:37.332 "superblock": true, 00:13:37.332 "num_base_bdevs": 3, 00:13:37.332 "num_base_bdevs_discovered": 2, 00:13:37.332 "num_base_bdevs_operational": 2, 00:13:37.332 "base_bdevs_list": [ 00:13:37.332 { 00:13:37.332 "name": null, 00:13:37.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.332 "is_configured": false, 00:13:37.332 "data_offset": 0, 00:13:37.332 "data_size": 63488 00:13:37.332 }, 00:13:37.332 { 00:13:37.332 "name": "pt2", 00:13:37.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.333 "is_configured": true, 00:13:37.333 "data_offset": 2048, 00:13:37.333 "data_size": 63488 00:13:37.333 }, 00:13:37.333 { 00:13:37.333 "name": "pt3", 00:13:37.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.333 "is_configured": true, 00:13:37.333 "data_offset": 2048, 00:13:37.333 "data_size": 63488 00:13:37.333 } 
00:13:37.333 ] 00:13:37.333 }' 00:13:37.333 19:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.333 19:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 [2024-12-05 19:33:31.082708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.902 [2024-12-05 19:33:31.082761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.902 [2024-12-05 19:33:31.082875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.902 [2024-12-05 19:33:31.082957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.902 [2024-12-05 19:33:31.082981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.902 19:33:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 [2024-12-05 19:33:31.162678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.902 [2024-12-05 19:33:31.162811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.902 [2024-12-05 19:33:31.162840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:37.902 [2024-12-05 19:33:31.162857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.902 [2024-12-05 19:33:31.165761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.902 [2024-12-05 19:33:31.165813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.902 [2024-12-05 19:33:31.165912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:37.902 [2024-12-05 19:33:31.165980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.902 pt2 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.902 19:33:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.902 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.902 "name": "raid_bdev1", 00:13:37.902 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:37.902 "strip_size_kb": 0, 00:13:37.902 "state": "configuring", 00:13:37.902 "raid_level": "raid1", 00:13:37.902 "superblock": true, 00:13:37.902 "num_base_bdevs": 3, 00:13:37.902 "num_base_bdevs_discovered": 1, 00:13:37.902 "num_base_bdevs_operational": 2, 00:13:37.902 "base_bdevs_list": [ 00:13:37.902 { 00:13:37.902 "name": null, 00:13:37.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.902 "is_configured": false, 00:13:37.902 "data_offset": 2048, 00:13:37.902 "data_size": 63488 00:13:37.902 }, 00:13:37.902 { 00:13:37.902 "name": "pt2", 00:13:37.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.902 "is_configured": true, 00:13:37.903 "data_offset": 2048, 00:13:37.903 "data_size": 63488 00:13:37.903 }, 00:13:37.903 { 00:13:37.903 "name": null, 00:13:37.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.903 "is_configured": false, 00:13:37.903 "data_offset": 2048, 00:13:37.903 "data_size": 63488 00:13:37.903 } 
00:13:37.903 ] 00:13:37.903 }' 00:13:37.903 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.903 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.469 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:38.469 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:38.469 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:38.469 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:38.469 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.469 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.469 [2024-12-05 19:33:31.678976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:38.469 [2024-12-05 19:33:31.679065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.469 [2024-12-05 19:33:31.679112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:38.469 [2024-12-05 19:33:31.679161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.469 [2024-12-05 19:33:31.679833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.470 [2024-12-05 19:33:31.679876] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:38.470 [2024-12-05 19:33:31.679987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:38.470 [2024-12-05 19:33:31.680028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:38.470 [2024-12-05 19:33:31.680177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:38.470 [2024-12-05 19:33:31.680206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.470 [2024-12-05 19:33:31.680537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:38.470 [2024-12-05 19:33:31.680775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:38.470 [2024-12-05 19:33:31.680793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:38.470 [2024-12-05 19:33:31.680969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.470 pt3 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.470 
19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.470 "name": "raid_bdev1", 00:13:38.470 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:38.470 "strip_size_kb": 0, 00:13:38.470 "state": "online", 00:13:38.470 "raid_level": "raid1", 00:13:38.470 "superblock": true, 00:13:38.470 "num_base_bdevs": 3, 00:13:38.470 "num_base_bdevs_discovered": 2, 00:13:38.470 "num_base_bdevs_operational": 2, 00:13:38.470 "base_bdevs_list": [ 00:13:38.470 { 00:13:38.470 "name": null, 00:13:38.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.470 "is_configured": false, 00:13:38.470 "data_offset": 2048, 00:13:38.470 "data_size": 63488 00:13:38.470 }, 00:13:38.470 { 00:13:38.470 "name": "pt2", 00:13:38.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.470 "is_configured": true, 00:13:38.470 "data_offset": 2048, 00:13:38.470 "data_size": 63488 00:13:38.470 }, 00:13:38.470 { 00:13:38.470 "name": "pt3", 00:13:38.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.470 "is_configured": true, 00:13:38.470 "data_offset": 2048, 00:13:38.470 "data_size": 63488 00:13:38.470 } 00:13:38.470 ] 00:13:38.470 }' 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.470 19:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.039 [2024-12-05 19:33:32.187076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.039 [2024-12-05 19:33:32.187145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.039 [2024-12-05 19:33:32.187271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.039 [2024-12-05 19:33:32.187353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.039 [2024-12-05 19:33:32.187369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.039 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.040 [2024-12-05 19:33:32.259115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:39.040 [2024-12-05 19:33:32.259206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.040 [2024-12-05 19:33:32.259237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:39.040 [2024-12-05 19:33:32.259252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.040 [2024-12-05 19:33:32.262392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.040 [2024-12-05 19:33:32.262437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:39.040 [2024-12-05 19:33:32.262557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:39.040 [2024-12-05 19:33:32.262618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:39.040 [2024-12-05 19:33:32.262846] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:39.040 [2024-12-05 19:33:32.262866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.040 [2024-12-05 19:33:32.262889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:39.040 [2024-12-05 19:33:32.262961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.040 pt1 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.040 "name": "raid_bdev1", 00:13:39.040 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:39.040 "strip_size_kb": 0, 00:13:39.040 "state": "configuring", 00:13:39.040 "raid_level": "raid1", 00:13:39.040 "superblock": true, 00:13:39.040 "num_base_bdevs": 3, 00:13:39.040 "num_base_bdevs_discovered": 1, 00:13:39.040 "num_base_bdevs_operational": 2, 00:13:39.040 "base_bdevs_list": [ 00:13:39.040 { 00:13:39.040 "name": null, 00:13:39.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.040 "is_configured": false, 00:13:39.040 "data_offset": 2048, 00:13:39.040 "data_size": 63488 00:13:39.040 }, 00:13:39.040 { 00:13:39.040 "name": "pt2", 00:13:39.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.040 "is_configured": true, 00:13:39.040 "data_offset": 2048, 00:13:39.040 "data_size": 63488 00:13:39.040 }, 00:13:39.040 { 00:13:39.040 "name": null, 00:13:39.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.040 "is_configured": false, 00:13:39.040 "data_offset": 2048, 00:13:39.040 "data_size": 63488 00:13:39.040 } 00:13:39.040 ] 00:13:39.040 }' 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.040 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.608 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.608 [2024-12-05 19:33:32.827314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.608 [2024-12-05 19:33:32.827410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.608 [2024-12-05 19:33:32.827445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:39.608 [2024-12-05 19:33:32.827459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.608 [2024-12-05 19:33:32.828159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.608 [2024-12-05 19:33:32.828190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.608 [2024-12-05 19:33:32.828326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:39.608 [2024-12-05 19:33:32.828363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.608 [2024-12-05 19:33:32.828526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:39.608 [2024-12-05 19:33:32.828542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.608 [2024-12-05 19:33:32.828882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:39.608 [2024-12-05 19:33:32.829086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:39.608 [2024-12-05 19:33:32.829112] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:39.608 [2024-12-05 19:33:32.829314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.608 pt3 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.609 "name": "raid_bdev1", 00:13:39.609 "uuid": "8e31d3ce-f982-4c74-8d8e-7c557714937e", 00:13:39.609 "strip_size_kb": 0, 00:13:39.609 "state": "online", 00:13:39.609 "raid_level": "raid1", 00:13:39.609 "superblock": true, 00:13:39.609 "num_base_bdevs": 3, 00:13:39.609 "num_base_bdevs_discovered": 2, 00:13:39.609 "num_base_bdevs_operational": 2, 00:13:39.609 "base_bdevs_list": [ 00:13:39.609 { 00:13:39.609 "name": null, 00:13:39.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.609 "is_configured": false, 00:13:39.609 "data_offset": 2048, 00:13:39.609 "data_size": 63488 00:13:39.609 }, 00:13:39.609 { 00:13:39.609 "name": "pt2", 00:13:39.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.609 "is_configured": true, 00:13:39.609 "data_offset": 2048, 00:13:39.609 "data_size": 63488 00:13:39.609 }, 00:13:39.609 { 00:13:39.609 "name": "pt3", 00:13:39.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.609 "is_configured": true, 00:13:39.609 "data_offset": 2048, 00:13:39.609 "data_size": 63488 00:13:39.609 } 00:13:39.609 ] 00:13:39.609 }' 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.609 19:33:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:40.178 [2024-12-05 19:33:33.411933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8e31d3ce-f982-4c74-8d8e-7c557714937e '!=' 8e31d3ce-f982-4c74-8d8e-7c557714937e ']' 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68706 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68706 ']' 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68706 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68706 00:13:40.178 killing process with pid 68706 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68706' 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68706 00:13:40.178 [2024-12-05 19:33:33.489529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.178 19:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68706 00:13:40.178 [2024-12-05 19:33:33.489637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.178 [2024-12-05 19:33:33.489731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.178 [2024-12-05 19:33:33.489764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:40.437 [2024-12-05 19:33:33.752282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.375 ************************************ 00:13:41.375 END TEST raid_superblock_test 00:13:41.375 ************************************ 00:13:41.375 19:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:41.375 00:13:41.375 real 0m8.546s 00:13:41.375 user 0m13.982s 00:13:41.375 sys 0m1.207s 00:13:41.375 19:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.375 19:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 19:33:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:41.635 19:33:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:41.635 19:33:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.635 19:33:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 ************************************ 00:13:41.635 START TEST raid_read_error_test 00:13:41.635 ************************************ 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:41.635 19:33:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:41.635 19:33:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v511k3GqAj 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69157 00:13:41.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69157 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69157 ']' 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.635 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.636 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.636 19:33:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.636 [2024-12-05 19:33:34.958190] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:13:41.636 [2024-12-05 19:33:34.958365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69157 ] 00:13:41.903 [2024-12-05 19:33:35.145925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.903 [2024-12-05 19:33:35.277894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.167 [2024-12-05 19:33:35.483874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.167 [2024-12-05 19:33:35.484068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 BaseBdev1_malloc 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 true 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 [2024-12-05 19:33:35.982068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:42.735 [2024-12-05 19:33:35.982139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.735 [2024-12-05 19:33:35.982170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:42.735 [2024-12-05 19:33:35.982187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.735 [2024-12-05 19:33:35.985267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.735 [2024-12-05 19:33:35.985320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:42.735 BaseBdev1 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 19:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 BaseBdev2_malloc 00:13:42.735 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.735 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:42.735 19:33:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 true 00:13:42.735 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.736 [2024-12-05 19:33:36.040255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:42.736 [2024-12-05 19:33:36.040468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.736 [2024-12-05 19:33:36.040506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:42.736 [2024-12-05 19:33:36.040524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.736 [2024-12-05 19:33:36.043407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.736 [2024-12-05 19:33:36.043459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:42.736 BaseBdev2 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.736 BaseBdev3_malloc 00:13:42.736 19:33:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.736 true 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.736 [2024-12-05 19:33:36.106894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:42.736 [2024-12-05 19:33:36.106963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.736 [2024-12-05 19:33:36.106992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:42.736 [2024-12-05 19:33:36.107009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.736 [2024-12-05 19:33:36.109853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.736 [2024-12-05 19:33:36.109905] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:42.736 BaseBdev3 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.736 [2024-12-05 19:33:36.118986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.736 [2024-12-05 19:33:36.121559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.736 [2024-12-05 19:33:36.121663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.736 [2024-12-05 19:33:36.122036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:42.736 [2024-12-05 19:33:36.122057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.736 [2024-12-05 19:33:36.122399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:42.736 [2024-12-05 19:33:36.122635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:42.736 [2024-12-05 19:33:36.122661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:42.736 [2024-12-05 19:33:36.122954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.736 19:33:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.736 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.995 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.995 "name": "raid_bdev1", 00:13:42.995 "uuid": "00ae092e-d254-491d-b20e-48068232dd02", 00:13:42.995 "strip_size_kb": 0, 00:13:42.995 "state": "online", 00:13:42.995 "raid_level": "raid1", 00:13:42.995 "superblock": true, 00:13:42.995 "num_base_bdevs": 3, 00:13:42.995 "num_base_bdevs_discovered": 3, 00:13:42.995 "num_base_bdevs_operational": 3, 00:13:42.995 "base_bdevs_list": [ 00:13:42.995 { 00:13:42.995 "name": "BaseBdev1", 00:13:42.995 "uuid": "6929f8f9-cac4-5eb9-92dd-c34055337563", 00:13:42.995 "is_configured": true, 00:13:42.995 "data_offset": 2048, 00:13:42.995 "data_size": 63488 00:13:42.995 }, 00:13:42.995 { 00:13:42.995 "name": "BaseBdev2", 00:13:42.995 "uuid": "75aa6fc9-42e5-5dda-9771-48e3c3d58a1f", 00:13:42.995 "is_configured": true, 00:13:42.995 "data_offset": 2048, 00:13:42.995 "data_size": 63488 
00:13:42.995 }, 00:13:42.995 { 00:13:42.995 "name": "BaseBdev3", 00:13:42.995 "uuid": "36ffa88e-56e6-5940-b01f-7f1cad742edb", 00:13:42.995 "is_configured": true, 00:13:42.995 "data_offset": 2048, 00:13:42.995 "data_size": 63488 00:13:42.995 } 00:13:42.996 ] 00:13:42.996 }' 00:13:42.996 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.996 19:33:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.254 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:43.254 19:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:43.513 [2024-12-05 19:33:36.764614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.452 
19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.452 "name": "raid_bdev1", 00:13:44.452 "uuid": "00ae092e-d254-491d-b20e-48068232dd02", 00:13:44.452 "strip_size_kb": 0, 00:13:44.452 "state": "online", 00:13:44.452 "raid_level": "raid1", 00:13:44.452 "superblock": true, 00:13:44.452 "num_base_bdevs": 3, 00:13:44.452 "num_base_bdevs_discovered": 3, 00:13:44.452 "num_base_bdevs_operational": 3, 00:13:44.452 "base_bdevs_list": [ 00:13:44.452 { 00:13:44.452 "name": "BaseBdev1", 00:13:44.452 "uuid": "6929f8f9-cac4-5eb9-92dd-c34055337563", 
00:13:44.452 "is_configured": true, 00:13:44.452 "data_offset": 2048, 00:13:44.452 "data_size": 63488 00:13:44.452 }, 00:13:44.452 { 00:13:44.452 "name": "BaseBdev2", 00:13:44.452 "uuid": "75aa6fc9-42e5-5dda-9771-48e3c3d58a1f", 00:13:44.452 "is_configured": true, 00:13:44.452 "data_offset": 2048, 00:13:44.452 "data_size": 63488 00:13:44.452 }, 00:13:44.452 { 00:13:44.452 "name": "BaseBdev3", 00:13:44.452 "uuid": "36ffa88e-56e6-5940-b01f-7f1cad742edb", 00:13:44.452 "is_configured": true, 00:13:44.452 "data_offset": 2048, 00:13:44.452 "data_size": 63488 00:13:44.452 } 00:13:44.452 ] 00:13:44.452 }' 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.452 19:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.021 [2024-12-05 19:33:38.182546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.021 [2024-12-05 19:33:38.182581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.021 [2024-12-05 19:33:38.186260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.021 [2024-12-05 19:33:38.186332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.021 [2024-12-05 19:33:38.186558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.021 [2024-12-05 19:33:38.186579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:45.021 { 00:13:45.021 "results": [ 00:13:45.021 { 00:13:45.021 "job": "raid_bdev1", 
00:13:45.021 "core_mask": "0x1", 00:13:45.021 "workload": "randrw", 00:13:45.021 "percentage": 50, 00:13:45.021 "status": "finished", 00:13:45.021 "queue_depth": 1, 00:13:45.021 "io_size": 131072, 00:13:45.021 "runtime": 1.415308, 00:13:45.021 "iops": 8983.203656023989, 00:13:45.021 "mibps": 1122.9004570029986, 00:13:45.021 "io_failed": 0, 00:13:45.021 "io_timeout": 0, 00:13:45.021 "avg_latency_us": 106.792695239321, 00:13:45.021 "min_latency_us": 42.123636363636365, 00:13:45.021 "max_latency_us": 1995.8690909090908 00:13:45.021 } 00:13:45.021 ], 00:13:45.021 "core_count": 1 00:13:45.021 } 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69157 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69157 ']' 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69157 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69157 00:13:45.021 killing process with pid 69157 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69157' 00:13:45.021 19:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69157 00:13:45.021 [2024-12-05 19:33:38.225641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.021 19:33:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69157 00:13:45.021 [2024-12-05 19:33:38.430045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v511k3GqAj 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:46.400 00:13:46.400 real 0m4.714s 00:13:46.400 user 0m5.851s 00:13:46.400 sys 0m0.573s 00:13:46.400 ************************************ 00:13:46.400 END TEST raid_read_error_test 00:13:46.400 ************************************ 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.400 19:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.400 19:33:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:46.400 19:33:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:46.400 19:33:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.400 19:33:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.400 ************************************ 00:13:46.400 START TEST raid_write_error_test 00:13:46.400 ************************************ 00:13:46.400 19:33:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:46.400 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Nd0M4DhgGI 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69303 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69303 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69303 ']' 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.401 19:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.401 [2024-12-05 19:33:39.732910] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:13:46.401 [2024-12-05 19:33:39.733090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69303 ] 00:13:46.755 [2024-12-05 19:33:39.924857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.755 [2024-12-05 19:33:40.071013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.013 [2024-12-05 19:33:40.294977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.013 [2024-12-05 19:33:40.295045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 BaseBdev1_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 true 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 [2024-12-05 19:33:40.789155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:47.579 [2024-12-05 19:33:40.789255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.579 [2024-12-05 19:33:40.789286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:47.579 [2024-12-05 19:33:40.789306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.579 [2024-12-05 19:33:40.793218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.579 [2024-12-05 19:33:40.793287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.579 BaseBdev1 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.579 BaseBdev2_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 true 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 [2024-12-05 19:33:40.866454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:47.579 [2024-12-05 19:33:40.866523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.579 [2024-12-05 19:33:40.866580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:47.579 [2024-12-05 19:33:40.866598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.579 [2024-12-05 19:33:40.869586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.579 [2024-12-05 19:33:40.869651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.579 BaseBdev2 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.579 19:33:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 BaseBdev3_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 true 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.579 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.580 [2024-12-05 19:33:40.932776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:47.580 [2024-12-05 19:33:40.932841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.580 [2024-12-05 19:33:40.932870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:47.580 [2024-12-05 19:33:40.932889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.580 [2024-12-05 19:33:40.935703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.580 [2024-12-05 19:33:40.935760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:47.580 BaseBdev3 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.580 [2024-12-05 19:33:40.940872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.580 [2024-12-05 19:33:40.943405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.580 [2024-12-05 19:33:40.943502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.580 [2024-12-05 19:33:40.943801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:47.580 [2024-12-05 19:33:40.943820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.580 [2024-12-05 19:33:40.944132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:47.580 [2024-12-05 19:33:40.944491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:47.580 [2024-12-05 19:33:40.944517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:47.580 [2024-12-05 19:33:40.944767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.580 "name": "raid_bdev1", 00:13:47.580 "uuid": "393ce2b8-1502-466c-ada9-2ff07593f6b8", 00:13:47.580 "strip_size_kb": 0, 00:13:47.580 "state": "online", 00:13:47.580 "raid_level": "raid1", 00:13:47.580 "superblock": true, 00:13:47.580 "num_base_bdevs": 3, 00:13:47.580 "num_base_bdevs_discovered": 3, 00:13:47.580 "num_base_bdevs_operational": 3, 00:13:47.580 "base_bdevs_list": [ 00:13:47.580 { 00:13:47.580 "name": "BaseBdev1", 00:13:47.580 
"uuid": "a3b8d764-b2d8-5c11-b599-de7145f4a766", 00:13:47.580 "is_configured": true, 00:13:47.580 "data_offset": 2048, 00:13:47.580 "data_size": 63488 00:13:47.580 }, 00:13:47.580 { 00:13:47.580 "name": "BaseBdev2", 00:13:47.580 "uuid": "e7873758-c53d-5462-a32c-b29a53c5dc81", 00:13:47.580 "is_configured": true, 00:13:47.580 "data_offset": 2048, 00:13:47.580 "data_size": 63488 00:13:47.580 }, 00:13:47.580 { 00:13:47.580 "name": "BaseBdev3", 00:13:47.580 "uuid": "ac1eb3ce-d71b-58ae-855d-a0bd561fdfbe", 00:13:47.580 "is_configured": true, 00:13:47.580 "data_offset": 2048, 00:13:47.580 "data_size": 63488 00:13:47.580 } 00:13:47.580 ] 00:13:47.580 }' 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.580 19:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.145 19:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:48.145 19:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.145 [2024-12-05 19:33:41.570489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.080 [2024-12-05 19:33:42.450973] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:49.080 [2024-12-05 19:33:42.451029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.080 [2024-12-05 19:33:42.451281] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.080 "name": "raid_bdev1", 00:13:49.080 "uuid": "393ce2b8-1502-466c-ada9-2ff07593f6b8", 00:13:49.080 "strip_size_kb": 0, 00:13:49.080 "state": "online", 00:13:49.080 "raid_level": "raid1", 00:13:49.080 "superblock": true, 00:13:49.080 "num_base_bdevs": 3, 00:13:49.080 "num_base_bdevs_discovered": 2, 00:13:49.080 "num_base_bdevs_operational": 2, 00:13:49.080 "base_bdevs_list": [ 00:13:49.080 { 00:13:49.080 "name": null, 00:13:49.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.080 "is_configured": false, 00:13:49.080 "data_offset": 0, 00:13:49.080 "data_size": 63488 00:13:49.080 }, 00:13:49.080 { 00:13:49.080 "name": "BaseBdev2", 00:13:49.080 "uuid": "e7873758-c53d-5462-a32c-b29a53c5dc81", 00:13:49.080 "is_configured": true, 00:13:49.080 "data_offset": 2048, 00:13:49.080 "data_size": 63488 00:13:49.080 }, 00:13:49.080 { 00:13:49.080 "name": "BaseBdev3", 00:13:49.080 "uuid": "ac1eb3ce-d71b-58ae-855d-a0bd561fdfbe", 00:13:49.080 "is_configured": true, 00:13:49.080 "data_offset": 2048, 00:13:49.080 "data_size": 63488 00:13:49.080 } 00:13:49.080 ] 00:13:49.080 }' 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.080 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.647 [2024-12-05 19:33:42.980568] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.647 [2024-12-05 19:33:42.980605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.647 [2024-12-05 19:33:42.984058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.647 [2024-12-05 19:33:42.984140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.647 [2024-12-05 19:33:42.984251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.647 [2024-12-05 19:33:42.984274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:49.647 { 00:13:49.647 "results": [ 00:13:49.647 { 00:13:49.647 "job": "raid_bdev1", 00:13:49.647 "core_mask": "0x1", 00:13:49.647 "workload": "randrw", 00:13:49.647 "percentage": 50, 00:13:49.647 "status": "finished", 00:13:49.647 "queue_depth": 1, 00:13:49.647 "io_size": 131072, 00:13:49.647 "runtime": 1.407764, 00:13:49.647 "iops": 10619.677730074074, 00:13:49.647 "mibps": 1327.4597162592593, 00:13:49.647 "io_failed": 0, 00:13:49.647 "io_timeout": 0, 00:13:49.647 "avg_latency_us": 89.97176430525997, 00:13:49.647 "min_latency_us": 38.63272727272727, 00:13:49.647 "max_latency_us": 1839.4763636363637 00:13:49.647 } 00:13:49.647 ], 00:13:49.647 "core_count": 1 00:13:49.647 } 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69303 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69303 ']' 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69303 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:49.647 19:33:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.647 19:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69303 00:13:49.647 killing process with pid 69303 00:13:49.647 19:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.647 19:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.647 19:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69303' 00:13:49.647 19:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69303 00:13:49.647 [2024-12-05 19:33:43.024072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.647 19:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69303 00:13:49.905 [2024-12-05 19:33:43.229489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.864 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Nd0M4DhgGI 00:13:50.864 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:50.864 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:51.122 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:51.122 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:51.122 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.122 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:51.122 19:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:51.122 00:13:51.122 real 0m4.704s 00:13:51.122 user 0m5.803s 00:13:51.122 sys 0m0.632s 00:13:51.122 19:33:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.122 ************************************ 00:13:51.122 END TEST raid_write_error_test 00:13:51.122 ************************************ 00:13:51.122 19:33:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.122 19:33:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:51.122 19:33:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:51.122 19:33:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:51.122 19:33:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:51.122 19:33:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.122 19:33:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.122 ************************************ 00:13:51.122 START TEST raid_state_function_test 00:13:51.122 ************************************ 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:51.122 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:51.123 
19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:51.123 Process raid pid: 69446 00:13:51.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69446 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69446' 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69446 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69446 ']' 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.123 19:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.123 [2024-12-05 19:33:44.485557] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:13:51.123 [2024-12-05 19:33:44.486073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.381 [2024-12-05 19:33:44.678816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.640 [2024-12-05 19:33:44.854082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.640 [2024-12-05 19:33:45.057371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.640 [2024-12-05 19:33:45.057411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.214 [2024-12-05 19:33:45.495610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.214 [2024-12-05 19:33:45.495736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.214 [2024-12-05 19:33:45.495756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.214 [2024-12-05 19:33:45.495773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.214 [2024-12-05 19:33:45.495783] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:52.214 [2024-12-05 19:33:45.495797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.214 [2024-12-05 19:33:45.495806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:52.214 [2024-12-05 19:33:45.495820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.214 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.215 "name": "Existed_Raid", 00:13:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.215 "strip_size_kb": 64, 00:13:52.215 "state": "configuring", 00:13:52.215 "raid_level": "raid0", 00:13:52.215 "superblock": false, 00:13:52.215 "num_base_bdevs": 4, 00:13:52.215 "num_base_bdevs_discovered": 0, 00:13:52.215 "num_base_bdevs_operational": 4, 00:13:52.215 "base_bdevs_list": [ 00:13:52.215 { 00:13:52.215 "name": "BaseBdev1", 00:13:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.215 "is_configured": false, 00:13:52.215 "data_offset": 0, 00:13:52.215 "data_size": 0 00:13:52.215 }, 00:13:52.215 { 00:13:52.215 "name": "BaseBdev2", 00:13:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.215 "is_configured": false, 00:13:52.215 "data_offset": 0, 00:13:52.215 "data_size": 0 00:13:52.215 }, 00:13:52.215 { 00:13:52.215 "name": "BaseBdev3", 00:13:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.215 "is_configured": false, 00:13:52.215 "data_offset": 0, 00:13:52.215 "data_size": 0 00:13:52.215 }, 00:13:52.215 { 00:13:52.215 "name": "BaseBdev4", 00:13:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.215 "is_configured": false, 00:13:52.215 "data_offset": 0, 00:13:52.215 "data_size": 0 00:13:52.215 } 00:13:52.215 ] 00:13:52.215 }' 00:13:52.215 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.215 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.780 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:52.780 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.780 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.780 [2024-12-05 19:33:45.995745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.780 [2024-12-05 19:33:45.995794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:52.780 19:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.780 19:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.780 [2024-12-05 19:33:46.003744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.780 [2024-12-05 19:33:46.003799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.780 [2024-12-05 19:33:46.003814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.780 [2024-12-05 19:33:46.003830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.780 [2024-12-05 19:33:46.003840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.780 [2024-12-05 19:33:46.003854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.780 [2024-12-05 19:33:46.003864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:52.780 [2024-12-05 19:33:46.003877] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.780 [2024-12-05 19:33:46.047781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.780 BaseBdev1 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.780 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.780 [ 00:13:52.780 { 00:13:52.780 "name": "BaseBdev1", 00:13:52.780 "aliases": [ 00:13:52.780 "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128" 00:13:52.780 ], 00:13:52.780 "product_name": "Malloc disk", 00:13:52.780 "block_size": 512, 00:13:52.780 "num_blocks": 65536, 00:13:52.780 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:52.780 "assigned_rate_limits": { 00:13:52.780 "rw_ios_per_sec": 0, 00:13:52.780 "rw_mbytes_per_sec": 0, 00:13:52.780 "r_mbytes_per_sec": 0, 00:13:52.781 "w_mbytes_per_sec": 0 00:13:52.781 }, 00:13:52.781 "claimed": true, 00:13:52.781 "claim_type": "exclusive_write", 00:13:52.781 "zoned": false, 00:13:52.781 "supported_io_types": { 00:13:52.781 "read": true, 00:13:52.781 "write": true, 00:13:52.781 "unmap": true, 00:13:52.781 "flush": true, 00:13:52.781 "reset": true, 00:13:52.781 "nvme_admin": false, 00:13:52.781 "nvme_io": false, 00:13:52.781 "nvme_io_md": false, 00:13:52.781 "write_zeroes": true, 00:13:52.781 "zcopy": true, 00:13:52.781 "get_zone_info": false, 00:13:52.781 "zone_management": false, 00:13:52.781 "zone_append": false, 00:13:52.781 "compare": false, 00:13:52.781 "compare_and_write": false, 00:13:52.781 "abort": true, 00:13:52.781 "seek_hole": false, 00:13:52.781 "seek_data": false, 00:13:52.781 "copy": true, 00:13:52.781 "nvme_iov_md": false 00:13:52.781 }, 00:13:52.781 "memory_domains": [ 00:13:52.781 { 00:13:52.781 "dma_device_id": "system", 00:13:52.781 "dma_device_type": 1 00:13:52.781 }, 00:13:52.781 { 00:13:52.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.781 "dma_device_type": 2 00:13:52.781 } 00:13:52.781 ], 00:13:52.781 "driver_specific": {} 00:13:52.781 } 00:13:52.781 ] 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.781 "name": "Existed_Raid", 
00:13:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.781 "strip_size_kb": 64, 00:13:52.781 "state": "configuring", 00:13:52.781 "raid_level": "raid0", 00:13:52.781 "superblock": false, 00:13:52.781 "num_base_bdevs": 4, 00:13:52.781 "num_base_bdevs_discovered": 1, 00:13:52.781 "num_base_bdevs_operational": 4, 00:13:52.781 "base_bdevs_list": [ 00:13:52.781 { 00:13:52.781 "name": "BaseBdev1", 00:13:52.781 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:52.781 "is_configured": true, 00:13:52.781 "data_offset": 0, 00:13:52.781 "data_size": 65536 00:13:52.781 }, 00:13:52.781 { 00:13:52.781 "name": "BaseBdev2", 00:13:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.781 "is_configured": false, 00:13:52.781 "data_offset": 0, 00:13:52.781 "data_size": 0 00:13:52.781 }, 00:13:52.781 { 00:13:52.781 "name": "BaseBdev3", 00:13:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.781 "is_configured": false, 00:13:52.781 "data_offset": 0, 00:13:52.781 "data_size": 0 00:13:52.781 }, 00:13:52.781 { 00:13:52.781 "name": "BaseBdev4", 00:13:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.781 "is_configured": false, 00:13:52.781 "data_offset": 0, 00:13:52.781 "data_size": 0 00:13:52.781 } 00:13:52.781 ] 00:13:52.781 }' 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.781 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.348 [2024-12-05 19:33:46.628012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.348 [2024-12-05 19:33:46.628298] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.348 [2024-12-05 19:33:46.636078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.348 [2024-12-05 19:33:46.638606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.348 [2024-12-05 19:33:46.638840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.348 [2024-12-05 19:33:46.638869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.348 [2024-12-05 19:33:46.638888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.348 [2024-12-05 19:33:46.638899] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:53.348 [2024-12-05 19:33:46.638912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.348 "name": "Existed_Raid", 00:13:53.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.348 "strip_size_kb": 64, 00:13:53.348 "state": "configuring", 00:13:53.348 "raid_level": "raid0", 00:13:53.348 "superblock": false, 00:13:53.348 "num_base_bdevs": 4, 00:13:53.348 
"num_base_bdevs_discovered": 1, 00:13:53.348 "num_base_bdevs_operational": 4, 00:13:53.348 "base_bdevs_list": [ 00:13:53.348 { 00:13:53.348 "name": "BaseBdev1", 00:13:53.348 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:53.348 "is_configured": true, 00:13:53.348 "data_offset": 0, 00:13:53.348 "data_size": 65536 00:13:53.348 }, 00:13:53.348 { 00:13:53.348 "name": "BaseBdev2", 00:13:53.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.348 "is_configured": false, 00:13:53.348 "data_offset": 0, 00:13:53.348 "data_size": 0 00:13:53.348 }, 00:13:53.348 { 00:13:53.348 "name": "BaseBdev3", 00:13:53.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.348 "is_configured": false, 00:13:53.348 "data_offset": 0, 00:13:53.348 "data_size": 0 00:13:53.348 }, 00:13:53.348 { 00:13:53.348 "name": "BaseBdev4", 00:13:53.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.348 "is_configured": false, 00:13:53.348 "data_offset": 0, 00:13:53.348 "data_size": 0 00:13:53.348 } 00:13:53.348 ] 00:13:53.348 }' 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.348 19:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.918 [2024-12-05 19:33:47.195853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.918 BaseBdev2 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:53.918 19:33:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.918 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.918 [ 00:13:53.918 { 00:13:53.918 "name": "BaseBdev2", 00:13:53.918 "aliases": [ 00:13:53.918 "1c299229-ca15-465e-b15f-efcfafce0547" 00:13:53.918 ], 00:13:53.918 "product_name": "Malloc disk", 00:13:53.918 "block_size": 512, 00:13:53.918 "num_blocks": 65536, 00:13:53.918 "uuid": "1c299229-ca15-465e-b15f-efcfafce0547", 00:13:53.918 "assigned_rate_limits": { 00:13:53.918 "rw_ios_per_sec": 0, 00:13:53.918 "rw_mbytes_per_sec": 0, 00:13:53.918 "r_mbytes_per_sec": 0, 00:13:53.918 "w_mbytes_per_sec": 0 00:13:53.918 }, 00:13:53.918 "claimed": true, 00:13:53.918 "claim_type": "exclusive_write", 00:13:53.918 "zoned": false, 00:13:53.918 "supported_io_types": { 
00:13:53.919 "read": true, 00:13:53.919 "write": true, 00:13:53.919 "unmap": true, 00:13:53.919 "flush": true, 00:13:53.919 "reset": true, 00:13:53.919 "nvme_admin": false, 00:13:53.919 "nvme_io": false, 00:13:53.919 "nvme_io_md": false, 00:13:53.919 "write_zeroes": true, 00:13:53.919 "zcopy": true, 00:13:53.919 "get_zone_info": false, 00:13:53.919 "zone_management": false, 00:13:53.919 "zone_append": false, 00:13:53.919 "compare": false, 00:13:53.919 "compare_and_write": false, 00:13:53.919 "abort": true, 00:13:53.919 "seek_hole": false, 00:13:53.919 "seek_data": false, 00:13:53.919 "copy": true, 00:13:53.919 "nvme_iov_md": false 00:13:53.919 }, 00:13:53.919 "memory_domains": [ 00:13:53.919 { 00:13:53.919 "dma_device_id": "system", 00:13:53.919 "dma_device_type": 1 00:13:53.919 }, 00:13:53.919 { 00:13:53.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.919 "dma_device_type": 2 00:13:53.919 } 00:13:53.919 ], 00:13:53.919 "driver_specific": {} 00:13:53.919 } 00:13:53.919 ] 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.919 "name": "Existed_Raid", 00:13:53.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.919 "strip_size_kb": 64, 00:13:53.919 "state": "configuring", 00:13:53.919 "raid_level": "raid0", 00:13:53.919 "superblock": false, 00:13:53.919 "num_base_bdevs": 4, 00:13:53.919 "num_base_bdevs_discovered": 2, 00:13:53.919 "num_base_bdevs_operational": 4, 00:13:53.919 "base_bdevs_list": [ 00:13:53.919 { 00:13:53.919 "name": "BaseBdev1", 00:13:53.919 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:53.919 "is_configured": true, 00:13:53.919 "data_offset": 0, 00:13:53.919 "data_size": 65536 00:13:53.919 }, 00:13:53.919 { 00:13:53.919 "name": "BaseBdev2", 00:13:53.919 "uuid": "1c299229-ca15-465e-b15f-efcfafce0547", 00:13:53.919 
"is_configured": true, 00:13:53.919 "data_offset": 0, 00:13:53.919 "data_size": 65536 00:13:53.919 }, 00:13:53.919 { 00:13:53.919 "name": "BaseBdev3", 00:13:53.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.919 "is_configured": false, 00:13:53.919 "data_offset": 0, 00:13:53.919 "data_size": 0 00:13:53.919 }, 00:13:53.919 { 00:13:53.919 "name": "BaseBdev4", 00:13:53.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.919 "is_configured": false, 00:13:53.919 "data_offset": 0, 00:13:53.919 "data_size": 0 00:13:53.919 } 00:13:53.919 ] 00:13:53.919 }' 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.919 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.535 [2024-12-05 19:33:47.807972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.535 BaseBdev3 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.535 [ 00:13:54.535 { 00:13:54.535 "name": "BaseBdev3", 00:13:54.535 "aliases": [ 00:13:54.535 "8f48cf34-94ff-4a05-9593-67856924c042" 00:13:54.535 ], 00:13:54.535 "product_name": "Malloc disk", 00:13:54.535 "block_size": 512, 00:13:54.535 "num_blocks": 65536, 00:13:54.535 "uuid": "8f48cf34-94ff-4a05-9593-67856924c042", 00:13:54.535 "assigned_rate_limits": { 00:13:54.535 "rw_ios_per_sec": 0, 00:13:54.535 "rw_mbytes_per_sec": 0, 00:13:54.535 "r_mbytes_per_sec": 0, 00:13:54.535 "w_mbytes_per_sec": 0 00:13:54.535 }, 00:13:54.535 "claimed": true, 00:13:54.535 "claim_type": "exclusive_write", 00:13:54.535 "zoned": false, 00:13:54.535 "supported_io_types": { 00:13:54.535 "read": true, 00:13:54.535 "write": true, 00:13:54.535 "unmap": true, 00:13:54.535 "flush": true, 00:13:54.535 "reset": true, 00:13:54.535 "nvme_admin": false, 00:13:54.535 "nvme_io": false, 00:13:54.535 "nvme_io_md": false, 00:13:54.535 "write_zeroes": true, 00:13:54.535 "zcopy": true, 00:13:54.535 "get_zone_info": false, 00:13:54.535 "zone_management": false, 00:13:54.535 "zone_append": false, 00:13:54.535 "compare": false, 00:13:54.535 "compare_and_write": false, 
00:13:54.535 "abort": true, 00:13:54.535 "seek_hole": false, 00:13:54.535 "seek_data": false, 00:13:54.535 "copy": true, 00:13:54.535 "nvme_iov_md": false 00:13:54.535 }, 00:13:54.535 "memory_domains": [ 00:13:54.535 { 00:13:54.535 "dma_device_id": "system", 00:13:54.535 "dma_device_type": 1 00:13:54.535 }, 00:13:54.535 { 00:13:54.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.535 "dma_device_type": 2 00:13:54.535 } 00:13:54.535 ], 00:13:54.535 "driver_specific": {} 00:13:54.535 } 00:13:54.535 ] 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.535 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.536 "name": "Existed_Raid", 00:13:54.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.536 "strip_size_kb": 64, 00:13:54.536 "state": "configuring", 00:13:54.536 "raid_level": "raid0", 00:13:54.536 "superblock": false, 00:13:54.536 "num_base_bdevs": 4, 00:13:54.536 "num_base_bdevs_discovered": 3, 00:13:54.536 "num_base_bdevs_operational": 4, 00:13:54.536 "base_bdevs_list": [ 00:13:54.536 { 00:13:54.536 "name": "BaseBdev1", 00:13:54.536 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:54.536 "is_configured": true, 00:13:54.536 "data_offset": 0, 00:13:54.536 "data_size": 65536 00:13:54.536 }, 00:13:54.536 { 00:13:54.536 "name": "BaseBdev2", 00:13:54.536 "uuid": "1c299229-ca15-465e-b15f-efcfafce0547", 00:13:54.536 "is_configured": true, 00:13:54.536 "data_offset": 0, 00:13:54.536 "data_size": 65536 00:13:54.536 }, 00:13:54.536 { 00:13:54.536 "name": "BaseBdev3", 00:13:54.536 "uuid": "8f48cf34-94ff-4a05-9593-67856924c042", 00:13:54.536 "is_configured": true, 00:13:54.536 "data_offset": 0, 00:13:54.536 "data_size": 65536 00:13:54.536 }, 00:13:54.536 { 00:13:54.536 "name": "BaseBdev4", 00:13:54.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.536 "is_configured": false, 
00:13:54.536 "data_offset": 0, 00:13:54.536 "data_size": 0 00:13:54.536 } 00:13:54.536 ] 00:13:54.536 }' 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.536 19:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.104 [2024-12-05 19:33:48.411782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.104 [2024-12-05 19:33:48.412041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:55.104 [2024-12-05 19:33:48.412067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:55.104 [2024-12-05 19:33:48.412448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:55.104 [2024-12-05 19:33:48.412717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:55.104 [2024-12-05 19:33:48.412736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:55.104 [2024-12-05 19:33:48.413091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.104 BaseBdev4 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.104 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.104 [ 00:13:55.104 { 00:13:55.104 "name": "BaseBdev4", 00:13:55.104 "aliases": [ 00:13:55.104 "65a82ac4-0cbc-47b5-ba42-9f710440315b" 00:13:55.104 ], 00:13:55.104 "product_name": "Malloc disk", 00:13:55.104 "block_size": 512, 00:13:55.104 "num_blocks": 65536, 00:13:55.104 "uuid": "65a82ac4-0cbc-47b5-ba42-9f710440315b", 00:13:55.104 "assigned_rate_limits": { 00:13:55.104 "rw_ios_per_sec": 0, 00:13:55.104 "rw_mbytes_per_sec": 0, 00:13:55.104 "r_mbytes_per_sec": 0, 00:13:55.104 "w_mbytes_per_sec": 0 00:13:55.104 }, 00:13:55.104 "claimed": true, 00:13:55.104 "claim_type": "exclusive_write", 00:13:55.104 "zoned": false, 00:13:55.104 "supported_io_types": { 00:13:55.104 "read": true, 00:13:55.104 "write": true, 00:13:55.104 "unmap": true, 00:13:55.104 "flush": true, 00:13:55.104 "reset": true, 00:13:55.104 
"nvme_admin": false, 00:13:55.104 "nvme_io": false, 00:13:55.104 "nvme_io_md": false, 00:13:55.104 "write_zeroes": true, 00:13:55.104 "zcopy": true, 00:13:55.104 "get_zone_info": false, 00:13:55.104 "zone_management": false, 00:13:55.104 "zone_append": false, 00:13:55.104 "compare": false, 00:13:55.104 "compare_and_write": false, 00:13:55.104 "abort": true, 00:13:55.104 "seek_hole": false, 00:13:55.105 "seek_data": false, 00:13:55.105 "copy": true, 00:13:55.105 "nvme_iov_md": false 00:13:55.105 }, 00:13:55.105 "memory_domains": [ 00:13:55.105 { 00:13:55.105 "dma_device_id": "system", 00:13:55.105 "dma_device_type": 1 00:13:55.105 }, 00:13:55.105 { 00:13:55.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.105 "dma_device_type": 2 00:13:55.105 } 00:13:55.105 ], 00:13:55.105 "driver_specific": {} 00:13:55.105 } 00:13:55.105 ] 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.105 19:33:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.105 "name": "Existed_Raid", 00:13:55.105 "uuid": "d3773736-c6b8-4c10-bddb-e60acd818fd6", 00:13:55.105 "strip_size_kb": 64, 00:13:55.105 "state": "online", 00:13:55.105 "raid_level": "raid0", 00:13:55.105 "superblock": false, 00:13:55.105 "num_base_bdevs": 4, 00:13:55.105 "num_base_bdevs_discovered": 4, 00:13:55.105 "num_base_bdevs_operational": 4, 00:13:55.105 "base_bdevs_list": [ 00:13:55.105 { 00:13:55.105 "name": "BaseBdev1", 00:13:55.105 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:55.105 "is_configured": true, 00:13:55.105 "data_offset": 0, 00:13:55.105 "data_size": 65536 00:13:55.105 }, 00:13:55.105 { 00:13:55.105 "name": "BaseBdev2", 00:13:55.105 "uuid": "1c299229-ca15-465e-b15f-efcfafce0547", 00:13:55.105 "is_configured": true, 00:13:55.105 "data_offset": 0, 00:13:55.105 "data_size": 65536 00:13:55.105 }, 00:13:55.105 { 00:13:55.105 "name": "BaseBdev3", 00:13:55.105 "uuid": 
"8f48cf34-94ff-4a05-9593-67856924c042", 00:13:55.105 "is_configured": true, 00:13:55.105 "data_offset": 0, 00:13:55.105 "data_size": 65536 00:13:55.105 }, 00:13:55.105 { 00:13:55.105 "name": "BaseBdev4", 00:13:55.105 "uuid": "65a82ac4-0cbc-47b5-ba42-9f710440315b", 00:13:55.105 "is_configured": true, 00:13:55.105 "data_offset": 0, 00:13:55.105 "data_size": 65536 00:13:55.105 } 00:13:55.105 ] 00:13:55.105 }' 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.105 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.673 19:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.673 [2024-12-05 19:33:48.980468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.673 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.674 19:33:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.674 "name": "Existed_Raid", 00:13:55.674 "aliases": [ 00:13:55.674 "d3773736-c6b8-4c10-bddb-e60acd818fd6" 00:13:55.674 ], 00:13:55.674 "product_name": "Raid Volume", 00:13:55.674 "block_size": 512, 00:13:55.674 "num_blocks": 262144, 00:13:55.674 "uuid": "d3773736-c6b8-4c10-bddb-e60acd818fd6", 00:13:55.674 "assigned_rate_limits": { 00:13:55.674 "rw_ios_per_sec": 0, 00:13:55.674 "rw_mbytes_per_sec": 0, 00:13:55.674 "r_mbytes_per_sec": 0, 00:13:55.674 "w_mbytes_per_sec": 0 00:13:55.674 }, 00:13:55.674 "claimed": false, 00:13:55.674 "zoned": false, 00:13:55.674 "supported_io_types": { 00:13:55.674 "read": true, 00:13:55.674 "write": true, 00:13:55.674 "unmap": true, 00:13:55.674 "flush": true, 00:13:55.674 "reset": true, 00:13:55.674 "nvme_admin": false, 00:13:55.674 "nvme_io": false, 00:13:55.674 "nvme_io_md": false, 00:13:55.674 "write_zeroes": true, 00:13:55.674 "zcopy": false, 00:13:55.674 "get_zone_info": false, 00:13:55.674 "zone_management": false, 00:13:55.674 "zone_append": false, 00:13:55.674 "compare": false, 00:13:55.674 "compare_and_write": false, 00:13:55.674 "abort": false, 00:13:55.674 "seek_hole": false, 00:13:55.674 "seek_data": false, 00:13:55.674 "copy": false, 00:13:55.674 "nvme_iov_md": false 00:13:55.674 }, 00:13:55.674 "memory_domains": [ 00:13:55.674 { 00:13:55.674 "dma_device_id": "system", 00:13:55.674 "dma_device_type": 1 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.674 "dma_device_type": 2 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "system", 00:13:55.674 "dma_device_type": 1 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.674 "dma_device_type": 2 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "system", 00:13:55.674 "dma_device_type": 1 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:55.674 "dma_device_type": 2 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "system", 00:13:55.674 "dma_device_type": 1 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.674 "dma_device_type": 2 00:13:55.674 } 00:13:55.674 ], 00:13:55.674 "driver_specific": { 00:13:55.674 "raid": { 00:13:55.674 "uuid": "d3773736-c6b8-4c10-bddb-e60acd818fd6", 00:13:55.674 "strip_size_kb": 64, 00:13:55.674 "state": "online", 00:13:55.674 "raid_level": "raid0", 00:13:55.674 "superblock": false, 00:13:55.674 "num_base_bdevs": 4, 00:13:55.674 "num_base_bdevs_discovered": 4, 00:13:55.674 "num_base_bdevs_operational": 4, 00:13:55.674 "base_bdevs_list": [ 00:13:55.674 { 00:13:55.674 "name": "BaseBdev1", 00:13:55.674 "uuid": "fa78a8f2-b6a6-4ce3-92cf-eff3f7f21128", 00:13:55.674 "is_configured": true, 00:13:55.674 "data_offset": 0, 00:13:55.674 "data_size": 65536 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "name": "BaseBdev2", 00:13:55.674 "uuid": "1c299229-ca15-465e-b15f-efcfafce0547", 00:13:55.674 "is_configured": true, 00:13:55.674 "data_offset": 0, 00:13:55.674 "data_size": 65536 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "name": "BaseBdev3", 00:13:55.674 "uuid": "8f48cf34-94ff-4a05-9593-67856924c042", 00:13:55.674 "is_configured": true, 00:13:55.674 "data_offset": 0, 00:13:55.674 "data_size": 65536 00:13:55.674 }, 00:13:55.674 { 00:13:55.674 "name": "BaseBdev4", 00:13:55.674 "uuid": "65a82ac4-0cbc-47b5-ba42-9f710440315b", 00:13:55.674 "is_configured": true, 00:13:55.674 "data_offset": 0, 00:13:55.674 "data_size": 65536 00:13:55.674 } 00:13:55.674 ] 00:13:55.674 } 00:13:55.674 } 00:13:55.674 }' 00:13:55.674 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.674 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:55.674 BaseBdev2 00:13:55.674 BaseBdev3 
00:13:55.674 BaseBdev4' 00:13:55.674 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.933 19:33:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.933 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.934 19:33:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.934 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.934 [2024-12-05 19:33:49.348238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.934 [2024-12-05 19:33:49.348455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.934 [2024-12-05 19:33:49.348546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.193 "name": "Existed_Raid", 00:13:56.193 "uuid": "d3773736-c6b8-4c10-bddb-e60acd818fd6", 00:13:56.193 "strip_size_kb": 64, 00:13:56.193 "state": "offline", 00:13:56.193 "raid_level": "raid0", 00:13:56.193 "superblock": false, 00:13:56.193 "num_base_bdevs": 4, 00:13:56.193 "num_base_bdevs_discovered": 3, 00:13:56.193 "num_base_bdevs_operational": 3, 00:13:56.193 "base_bdevs_list": [ 00:13:56.193 { 00:13:56.193 "name": null, 00:13:56.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.193 "is_configured": false, 00:13:56.193 "data_offset": 0, 00:13:56.193 "data_size": 65536 00:13:56.193 }, 00:13:56.193 { 00:13:56.193 "name": "BaseBdev2", 00:13:56.193 "uuid": "1c299229-ca15-465e-b15f-efcfafce0547", 00:13:56.193 "is_configured": 
true, 00:13:56.193 "data_offset": 0, 00:13:56.193 "data_size": 65536 00:13:56.193 }, 00:13:56.193 { 00:13:56.193 "name": "BaseBdev3", 00:13:56.193 "uuid": "8f48cf34-94ff-4a05-9593-67856924c042", 00:13:56.193 "is_configured": true, 00:13:56.193 "data_offset": 0, 00:13:56.193 "data_size": 65536 00:13:56.193 }, 00:13:56.193 { 00:13:56.193 "name": "BaseBdev4", 00:13:56.193 "uuid": "65a82ac4-0cbc-47b5-ba42-9f710440315b", 00:13:56.193 "is_configured": true, 00:13:56.193 "data_offset": 0, 00:13:56.193 "data_size": 65536 00:13:56.193 } 00:13:56.193 ] 00:13:56.193 }' 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.193 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.760 19:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.760 [2024-12-05 19:33:50.013146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.760 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.760 [2024-12-05 19:33:50.156394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.019 19:33:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.019 [2024-12-05 19:33:50.293800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:57.019 [2024-12-05 19:33:50.293871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:57.019 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.020 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.279 BaseBdev2 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.279 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.279 [ 00:13:57.279 { 00:13:57.279 "name": "BaseBdev2", 00:13:57.279 "aliases": [ 00:13:57.279 "ece8f9eb-48ea-46f9-b2ac-0de89b744665" 00:13:57.279 ], 00:13:57.279 "product_name": "Malloc disk", 00:13:57.279 "block_size": 512, 00:13:57.279 "num_blocks": 65536, 00:13:57.279 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:13:57.279 "assigned_rate_limits": { 00:13:57.279 "rw_ios_per_sec": 0, 00:13:57.279 "rw_mbytes_per_sec": 0, 00:13:57.279 "r_mbytes_per_sec": 0, 00:13:57.279 "w_mbytes_per_sec": 0 00:13:57.279 }, 00:13:57.279 "claimed": false, 00:13:57.279 "zoned": false, 00:13:57.279 "supported_io_types": { 00:13:57.279 "read": true, 00:13:57.279 "write": true, 00:13:57.279 "unmap": true, 00:13:57.279 "flush": true, 00:13:57.279 "reset": true, 00:13:57.279 "nvme_admin": false, 00:13:57.279 "nvme_io": false, 00:13:57.279 "nvme_io_md": false, 00:13:57.279 "write_zeroes": true, 00:13:57.279 "zcopy": true, 00:13:57.279 "get_zone_info": false, 00:13:57.279 "zone_management": false, 00:13:57.279 "zone_append": false, 00:13:57.279 "compare": false, 00:13:57.280 "compare_and_write": false, 00:13:57.280 "abort": true, 00:13:57.280 "seek_hole": false, 00:13:57.280 
"seek_data": false, 00:13:57.280 "copy": true, 00:13:57.280 "nvme_iov_md": false 00:13:57.280 }, 00:13:57.280 "memory_domains": [ 00:13:57.280 { 00:13:57.280 "dma_device_id": "system", 00:13:57.280 "dma_device_type": 1 00:13:57.280 }, 00:13:57.280 { 00:13:57.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.280 "dma_device_type": 2 00:13:57.280 } 00:13:57.280 ], 00:13:57.280 "driver_specific": {} 00:13:57.280 } 00:13:57.280 ] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 BaseBdev3 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 [ 00:13:57.280 { 00:13:57.280 "name": "BaseBdev3", 00:13:57.280 "aliases": [ 00:13:57.280 "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e" 00:13:57.280 ], 00:13:57.280 "product_name": "Malloc disk", 00:13:57.280 "block_size": 512, 00:13:57.280 "num_blocks": 65536, 00:13:57.280 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:13:57.280 "assigned_rate_limits": { 00:13:57.280 "rw_ios_per_sec": 0, 00:13:57.280 "rw_mbytes_per_sec": 0, 00:13:57.280 "r_mbytes_per_sec": 0, 00:13:57.280 "w_mbytes_per_sec": 0 00:13:57.280 }, 00:13:57.280 "claimed": false, 00:13:57.280 "zoned": false, 00:13:57.280 "supported_io_types": { 00:13:57.280 "read": true, 00:13:57.280 "write": true, 00:13:57.280 "unmap": true, 00:13:57.280 "flush": true, 00:13:57.280 "reset": true, 00:13:57.280 "nvme_admin": false, 00:13:57.280 "nvme_io": false, 00:13:57.280 "nvme_io_md": false, 00:13:57.280 "write_zeroes": true, 00:13:57.280 "zcopy": true, 00:13:57.280 "get_zone_info": false, 00:13:57.280 "zone_management": false, 00:13:57.280 "zone_append": false, 00:13:57.280 "compare": false, 00:13:57.280 "compare_and_write": false, 00:13:57.280 "abort": true, 00:13:57.280 "seek_hole": false, 00:13:57.280 "seek_data": false, 
00:13:57.280 "copy": true, 00:13:57.280 "nvme_iov_md": false 00:13:57.280 }, 00:13:57.280 "memory_domains": [ 00:13:57.280 { 00:13:57.280 "dma_device_id": "system", 00:13:57.280 "dma_device_type": 1 00:13:57.280 }, 00:13:57.280 { 00:13:57.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.280 "dma_device_type": 2 00:13:57.280 } 00:13:57.280 ], 00:13:57.280 "driver_specific": {} 00:13:57.280 } 00:13:57.280 ] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 BaseBdev4 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.280 
19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 [ 00:13:57.280 { 00:13:57.280 "name": "BaseBdev4", 00:13:57.280 "aliases": [ 00:13:57.280 "54ac0ca8-3627-4165-9d6e-dc68f21b11fb" 00:13:57.280 ], 00:13:57.280 "product_name": "Malloc disk", 00:13:57.280 "block_size": 512, 00:13:57.280 "num_blocks": 65536, 00:13:57.280 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:13:57.280 "assigned_rate_limits": { 00:13:57.280 "rw_ios_per_sec": 0, 00:13:57.280 "rw_mbytes_per_sec": 0, 00:13:57.280 "r_mbytes_per_sec": 0, 00:13:57.280 "w_mbytes_per_sec": 0 00:13:57.280 }, 00:13:57.280 "claimed": false, 00:13:57.280 "zoned": false, 00:13:57.280 "supported_io_types": { 00:13:57.280 "read": true, 00:13:57.280 "write": true, 00:13:57.280 "unmap": true, 00:13:57.280 "flush": true, 00:13:57.280 "reset": true, 00:13:57.280 "nvme_admin": false, 00:13:57.280 "nvme_io": false, 00:13:57.280 "nvme_io_md": false, 00:13:57.280 "write_zeroes": true, 00:13:57.280 "zcopy": true, 00:13:57.280 "get_zone_info": false, 00:13:57.280 "zone_management": false, 00:13:57.280 "zone_append": false, 00:13:57.280 "compare": false, 00:13:57.280 "compare_and_write": false, 00:13:57.280 "abort": true, 00:13:57.280 "seek_hole": false, 00:13:57.280 "seek_data": false, 00:13:57.280 
"copy": true, 00:13:57.280 "nvme_iov_md": false 00:13:57.280 }, 00:13:57.280 "memory_domains": [ 00:13:57.280 { 00:13:57.280 "dma_device_id": "system", 00:13:57.280 "dma_device_type": 1 00:13:57.280 }, 00:13:57.280 { 00:13:57.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.280 "dma_device_type": 2 00:13:57.280 } 00:13:57.280 ], 00:13:57.280 "driver_specific": {} 00:13:57.280 } 00:13:57.280 ] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.280 [2024-12-05 19:33:50.673907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.280 [2024-12-05 19:33:50.674153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.280 [2024-12-05 19:33:50.674204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.280 [2024-12-05 19:33:50.676858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.280 [2024-12-05 19:33:50.676926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.280 19:33:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.280 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.281 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.540 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.540 "name": "Existed_Raid", 00:13:57.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.540 "strip_size_kb": 64, 00:13:57.540 "state": "configuring", 00:13:57.540 
"raid_level": "raid0", 00:13:57.540 "superblock": false, 00:13:57.540 "num_base_bdevs": 4, 00:13:57.540 "num_base_bdevs_discovered": 3, 00:13:57.540 "num_base_bdevs_operational": 4, 00:13:57.540 "base_bdevs_list": [ 00:13:57.540 { 00:13:57.540 "name": "BaseBdev1", 00:13:57.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.540 "is_configured": false, 00:13:57.540 "data_offset": 0, 00:13:57.540 "data_size": 0 00:13:57.540 }, 00:13:57.540 { 00:13:57.540 "name": "BaseBdev2", 00:13:57.540 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:13:57.540 "is_configured": true, 00:13:57.540 "data_offset": 0, 00:13:57.540 "data_size": 65536 00:13:57.540 }, 00:13:57.540 { 00:13:57.540 "name": "BaseBdev3", 00:13:57.540 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:13:57.540 "is_configured": true, 00:13:57.540 "data_offset": 0, 00:13:57.540 "data_size": 65536 00:13:57.540 }, 00:13:57.540 { 00:13:57.540 "name": "BaseBdev4", 00:13:57.540 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:13:57.540 "is_configured": true, 00:13:57.540 "data_offset": 0, 00:13:57.540 "data_size": 65536 00:13:57.540 } 00:13:57.540 ] 00:13:57.540 }' 00:13:57.540 19:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.540 19:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.799 [2024-12-05 19:33:51.202122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.799 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.059 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.059 "name": "Existed_Raid", 00:13:58.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.059 "strip_size_kb": 64, 00:13:58.059 "state": "configuring", 00:13:58.059 "raid_level": "raid0", 00:13:58.059 "superblock": false, 00:13:58.059 
"num_base_bdevs": 4, 00:13:58.059 "num_base_bdevs_discovered": 2, 00:13:58.059 "num_base_bdevs_operational": 4, 00:13:58.059 "base_bdevs_list": [ 00:13:58.059 { 00:13:58.059 "name": "BaseBdev1", 00:13:58.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.059 "is_configured": false, 00:13:58.059 "data_offset": 0, 00:13:58.059 "data_size": 0 00:13:58.059 }, 00:13:58.059 { 00:13:58.059 "name": null, 00:13:58.059 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:13:58.059 "is_configured": false, 00:13:58.059 "data_offset": 0, 00:13:58.059 "data_size": 65536 00:13:58.059 }, 00:13:58.059 { 00:13:58.059 "name": "BaseBdev3", 00:13:58.059 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:13:58.059 "is_configured": true, 00:13:58.059 "data_offset": 0, 00:13:58.059 "data_size": 65536 00:13:58.059 }, 00:13:58.059 { 00:13:58.059 "name": "BaseBdev4", 00:13:58.059 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:13:58.059 "is_configured": true, 00:13:58.059 "data_offset": 0, 00:13:58.059 "data_size": 65536 00:13:58.059 } 00:13:58.059 ] 00:13:58.059 }' 00:13:58.059 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.059 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.317 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.317 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.317 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.317 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:58.577 19:33:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.577 [2024-12-05 19:33:51.815186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.577 BaseBdev1 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.577 [ 00:13:58.577 { 00:13:58.577 "name": "BaseBdev1", 00:13:58.577 "aliases": [ 00:13:58.577 "5765a6f3-edef-40c6-a77a-904846f4e45d" 00:13:58.577 ], 00:13:58.577 "product_name": "Malloc disk", 00:13:58.577 "block_size": 512, 00:13:58.577 "num_blocks": 65536, 00:13:58.577 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:13:58.577 "assigned_rate_limits": { 00:13:58.577 "rw_ios_per_sec": 0, 00:13:58.577 "rw_mbytes_per_sec": 0, 00:13:58.577 "r_mbytes_per_sec": 0, 00:13:58.577 "w_mbytes_per_sec": 0 00:13:58.577 }, 00:13:58.577 "claimed": true, 00:13:58.577 "claim_type": "exclusive_write", 00:13:58.577 "zoned": false, 00:13:58.577 "supported_io_types": { 00:13:58.577 "read": true, 00:13:58.577 "write": true, 00:13:58.577 "unmap": true, 00:13:58.577 "flush": true, 00:13:58.577 "reset": true, 00:13:58.577 "nvme_admin": false, 00:13:58.577 "nvme_io": false, 00:13:58.577 "nvme_io_md": false, 00:13:58.577 "write_zeroes": true, 00:13:58.577 "zcopy": true, 00:13:58.577 "get_zone_info": false, 00:13:58.577 "zone_management": false, 00:13:58.577 "zone_append": false, 00:13:58.577 "compare": false, 00:13:58.577 "compare_and_write": false, 00:13:58.577 "abort": true, 00:13:58.577 "seek_hole": false, 00:13:58.577 "seek_data": false, 00:13:58.577 "copy": true, 00:13:58.577 "nvme_iov_md": false 00:13:58.577 }, 00:13:58.577 "memory_domains": [ 00:13:58.577 { 00:13:58.577 "dma_device_id": "system", 00:13:58.577 "dma_device_type": 1 00:13:58.577 }, 00:13:58.577 { 00:13:58.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.577 "dma_device_type": 2 00:13:58.577 } 00:13:58.577 ], 00:13:58.577 "driver_specific": {} 00:13:58.577 } 00:13:58.577 ] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.577 "name": "Existed_Raid", 00:13:58.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.577 "strip_size_kb": 64, 00:13:58.577 "state": "configuring", 00:13:58.577 "raid_level": "raid0", 00:13:58.577 "superblock": false, 
00:13:58.577 "num_base_bdevs": 4, 00:13:58.577 "num_base_bdevs_discovered": 3, 00:13:58.577 "num_base_bdevs_operational": 4, 00:13:58.577 "base_bdevs_list": [ 00:13:58.577 { 00:13:58.577 "name": "BaseBdev1", 00:13:58.577 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:13:58.577 "is_configured": true, 00:13:58.577 "data_offset": 0, 00:13:58.577 "data_size": 65536 00:13:58.577 }, 00:13:58.577 { 00:13:58.577 "name": null, 00:13:58.577 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:13:58.577 "is_configured": false, 00:13:58.577 "data_offset": 0, 00:13:58.577 "data_size": 65536 00:13:58.577 }, 00:13:58.577 { 00:13:58.577 "name": "BaseBdev3", 00:13:58.577 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:13:58.577 "is_configured": true, 00:13:58.577 "data_offset": 0, 00:13:58.577 "data_size": 65536 00:13:58.577 }, 00:13:58.577 { 00:13:58.577 "name": "BaseBdev4", 00:13:58.577 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:13:58.577 "is_configured": true, 00:13:58.577 "data_offset": 0, 00:13:58.577 "data_size": 65536 00:13:58.577 } 00:13:58.577 ] 00:13:58.577 }' 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.577 19:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:59.146 19:33:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.146 [2024-12-05 19:33:52.399426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.146 19:33:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.146 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.146 "name": "Existed_Raid", 00:13:59.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.147 "strip_size_kb": 64, 00:13:59.147 "state": "configuring", 00:13:59.147 "raid_level": "raid0", 00:13:59.147 "superblock": false, 00:13:59.147 "num_base_bdevs": 4, 00:13:59.147 "num_base_bdevs_discovered": 2, 00:13:59.147 "num_base_bdevs_operational": 4, 00:13:59.147 "base_bdevs_list": [ 00:13:59.147 { 00:13:59.147 "name": "BaseBdev1", 00:13:59.147 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:13:59.147 "is_configured": true, 00:13:59.147 "data_offset": 0, 00:13:59.147 "data_size": 65536 00:13:59.147 }, 00:13:59.147 { 00:13:59.147 "name": null, 00:13:59.147 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:13:59.147 "is_configured": false, 00:13:59.147 "data_offset": 0, 00:13:59.147 "data_size": 65536 00:13:59.147 }, 00:13:59.147 { 00:13:59.147 "name": null, 00:13:59.147 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:13:59.147 "is_configured": false, 00:13:59.147 "data_offset": 0, 00:13:59.147 "data_size": 65536 00:13:59.147 }, 00:13:59.147 { 00:13:59.147 "name": "BaseBdev4", 00:13:59.147 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:13:59.147 "is_configured": true, 00:13:59.147 "data_offset": 0, 00:13:59.147 "data_size": 65536 00:13:59.147 } 00:13:59.147 ] 00:13:59.147 }' 00:13:59.147 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.147 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.747 [2024-12-05 19:33:52.947630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.747 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.748 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 19:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.748 19:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.748 "name": "Existed_Raid", 00:13:59.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.748 "strip_size_kb": 64, 00:13:59.748 "state": "configuring", 00:13:59.748 "raid_level": "raid0", 00:13:59.748 "superblock": false, 00:13:59.748 "num_base_bdevs": 4, 00:13:59.748 "num_base_bdevs_discovered": 3, 00:13:59.748 "num_base_bdevs_operational": 4, 00:13:59.748 "base_bdevs_list": [ 00:13:59.748 { 00:13:59.748 "name": "BaseBdev1", 00:13:59.748 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:13:59.748 "is_configured": true, 00:13:59.748 "data_offset": 0, 00:13:59.748 "data_size": 65536 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "name": null, 00:13:59.748 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:13:59.748 "is_configured": false, 00:13:59.748 "data_offset": 0, 00:13:59.748 "data_size": 65536 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "name": "BaseBdev3", 00:13:59.748 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 
00:13:59.748 "is_configured": true, 00:13:59.748 "data_offset": 0, 00:13:59.748 "data_size": 65536 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "name": "BaseBdev4", 00:13:59.748 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:13:59.748 "is_configured": true, 00:13:59.748 "data_offset": 0, 00:13:59.748 "data_size": 65536 00:13:59.748 } 00:13:59.748 ] 00:13:59.748 }' 00:13:59.748 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.748 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.316 [2024-12-05 19:33:53.523901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:00.316 19:33:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.316 "name": "Existed_Raid", 00:14:00.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.316 "strip_size_kb": 64, 00:14:00.316 "state": "configuring", 00:14:00.316 "raid_level": "raid0", 00:14:00.316 "superblock": false, 00:14:00.316 "num_base_bdevs": 4, 00:14:00.316 "num_base_bdevs_discovered": 2, 00:14:00.316 
"num_base_bdevs_operational": 4, 00:14:00.316 "base_bdevs_list": [ 00:14:00.316 { 00:14:00.316 "name": null, 00:14:00.316 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:14:00.316 "is_configured": false, 00:14:00.316 "data_offset": 0, 00:14:00.316 "data_size": 65536 00:14:00.316 }, 00:14:00.316 { 00:14:00.316 "name": null, 00:14:00.316 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:14:00.316 "is_configured": false, 00:14:00.316 "data_offset": 0, 00:14:00.316 "data_size": 65536 00:14:00.316 }, 00:14:00.316 { 00:14:00.316 "name": "BaseBdev3", 00:14:00.316 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:14:00.316 "is_configured": true, 00:14:00.316 "data_offset": 0, 00:14:00.316 "data_size": 65536 00:14:00.316 }, 00:14:00.316 { 00:14:00.316 "name": "BaseBdev4", 00:14:00.316 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:14:00.316 "is_configured": true, 00:14:00.316 "data_offset": 0, 00:14:00.316 "data_size": 65536 00:14:00.316 } 00:14:00.316 ] 00:14:00.316 }' 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.316 19:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.884 [2024-12-05 19:33:54.183648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.884 19:33:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.884 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.884 "name": "Existed_Raid", 00:14:00.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.885 "strip_size_kb": 64, 00:14:00.885 "state": "configuring", 00:14:00.885 "raid_level": "raid0", 00:14:00.885 "superblock": false, 00:14:00.885 "num_base_bdevs": 4, 00:14:00.885 "num_base_bdevs_discovered": 3, 00:14:00.885 "num_base_bdevs_operational": 4, 00:14:00.885 "base_bdevs_list": [ 00:14:00.885 { 00:14:00.885 "name": null, 00:14:00.885 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:14:00.885 "is_configured": false, 00:14:00.885 "data_offset": 0, 00:14:00.885 "data_size": 65536 00:14:00.885 }, 00:14:00.885 { 00:14:00.885 "name": "BaseBdev2", 00:14:00.885 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:14:00.885 "is_configured": true, 00:14:00.885 "data_offset": 0, 00:14:00.885 "data_size": 65536 00:14:00.885 }, 00:14:00.885 { 00:14:00.885 "name": "BaseBdev3", 00:14:00.885 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:14:00.885 "is_configured": true, 00:14:00.885 "data_offset": 0, 00:14:00.885 "data_size": 65536 00:14:00.885 }, 00:14:00.885 { 00:14:00.885 "name": "BaseBdev4", 00:14:00.885 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:14:00.885 "is_configured": true, 00:14:00.885 "data_offset": 0, 00:14:00.885 "data_size": 65536 00:14:00.885 } 00:14:00.885 ] 00:14:00.885 }' 00:14:00.885 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.885 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.453 19:33:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5765a6f3-edef-40c6-a77a-904846f4e45d 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.453 [2024-12-05 19:33:54.854398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.453 [2024-12-05 19:33:54.854455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:01.453 [2024-12-05 19:33:54.854467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:01.453 [2024-12-05 19:33:54.854863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:14:01.453 [2024-12-05 19:33:54.855050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:01.453 [2024-12-05 19:33:54.855069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:01.453 [2024-12-05 19:33:54.855388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.453 NewBaseBdev 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:01.453 [ 00:14:01.453 { 00:14:01.453 "name": "NewBaseBdev", 00:14:01.453 "aliases": [ 00:14:01.453 "5765a6f3-edef-40c6-a77a-904846f4e45d" 00:14:01.453 ], 00:14:01.453 "product_name": "Malloc disk", 00:14:01.453 "block_size": 512, 00:14:01.453 "num_blocks": 65536, 00:14:01.453 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:14:01.453 "assigned_rate_limits": { 00:14:01.453 "rw_ios_per_sec": 0, 00:14:01.453 "rw_mbytes_per_sec": 0, 00:14:01.453 "r_mbytes_per_sec": 0, 00:14:01.453 "w_mbytes_per_sec": 0 00:14:01.453 }, 00:14:01.453 "claimed": true, 00:14:01.453 "claim_type": "exclusive_write", 00:14:01.453 "zoned": false, 00:14:01.453 "supported_io_types": { 00:14:01.453 "read": true, 00:14:01.453 "write": true, 00:14:01.453 "unmap": true, 00:14:01.453 "flush": true, 00:14:01.453 "reset": true, 00:14:01.453 "nvme_admin": false, 00:14:01.453 "nvme_io": false, 00:14:01.453 "nvme_io_md": false, 00:14:01.453 "write_zeroes": true, 00:14:01.453 "zcopy": true, 00:14:01.453 "get_zone_info": false, 00:14:01.453 "zone_management": false, 00:14:01.453 "zone_append": false, 00:14:01.453 "compare": false, 00:14:01.453 "compare_and_write": false, 00:14:01.453 "abort": true, 00:14:01.453 "seek_hole": false, 00:14:01.453 "seek_data": false, 00:14:01.453 "copy": true, 00:14:01.453 "nvme_iov_md": false 00:14:01.453 }, 00:14:01.453 "memory_domains": [ 00:14:01.453 { 00:14:01.453 "dma_device_id": "system", 00:14:01.453 "dma_device_type": 1 00:14:01.453 }, 00:14:01.453 { 00:14:01.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.453 "dma_device_type": 2 00:14:01.453 } 00:14:01.453 ], 00:14:01.453 "driver_specific": {} 00:14:01.453 } 00:14:01.453 ] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.453 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.454 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.713 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.713 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.713 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.713 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.713 "name": "Existed_Raid", 00:14:01.713 "uuid": "01948b51-6164-4a57-8f2d-4b8ac068d1b4", 00:14:01.713 "strip_size_kb": 64, 00:14:01.713 "state": "online", 00:14:01.713 "raid_level": "raid0", 00:14:01.713 "superblock": false, 00:14:01.713 "num_base_bdevs": 4, 00:14:01.713 
"num_base_bdevs_discovered": 4, 00:14:01.713 "num_base_bdevs_operational": 4, 00:14:01.713 "base_bdevs_list": [ 00:14:01.713 { 00:14:01.713 "name": "NewBaseBdev", 00:14:01.713 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:14:01.713 "is_configured": true, 00:14:01.713 "data_offset": 0, 00:14:01.713 "data_size": 65536 00:14:01.713 }, 00:14:01.713 { 00:14:01.713 "name": "BaseBdev2", 00:14:01.713 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:14:01.713 "is_configured": true, 00:14:01.713 "data_offset": 0, 00:14:01.713 "data_size": 65536 00:14:01.713 }, 00:14:01.713 { 00:14:01.713 "name": "BaseBdev3", 00:14:01.713 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:14:01.713 "is_configured": true, 00:14:01.713 "data_offset": 0, 00:14:01.713 "data_size": 65536 00:14:01.713 }, 00:14:01.713 { 00:14:01.713 "name": "BaseBdev4", 00:14:01.713 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:14:01.713 "is_configured": true, 00:14:01.713 "data_offset": 0, 00:14:01.713 "data_size": 65536 00:14:01.713 } 00:14:01.713 ] 00:14:01.713 }' 00:14:01.713 19:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.713 19:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 [2024-12-05 19:33:55.435199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.280 "name": "Existed_Raid", 00:14:02.280 "aliases": [ 00:14:02.280 "01948b51-6164-4a57-8f2d-4b8ac068d1b4" 00:14:02.280 ], 00:14:02.280 "product_name": "Raid Volume", 00:14:02.280 "block_size": 512, 00:14:02.280 "num_blocks": 262144, 00:14:02.280 "uuid": "01948b51-6164-4a57-8f2d-4b8ac068d1b4", 00:14:02.280 "assigned_rate_limits": { 00:14:02.280 "rw_ios_per_sec": 0, 00:14:02.280 "rw_mbytes_per_sec": 0, 00:14:02.280 "r_mbytes_per_sec": 0, 00:14:02.280 "w_mbytes_per_sec": 0 00:14:02.280 }, 00:14:02.280 "claimed": false, 00:14:02.280 "zoned": false, 00:14:02.280 "supported_io_types": { 00:14:02.280 "read": true, 00:14:02.280 "write": true, 00:14:02.280 "unmap": true, 00:14:02.280 "flush": true, 00:14:02.280 "reset": true, 00:14:02.280 "nvme_admin": false, 00:14:02.280 "nvme_io": false, 00:14:02.280 "nvme_io_md": false, 00:14:02.280 "write_zeroes": true, 00:14:02.280 "zcopy": false, 00:14:02.280 "get_zone_info": false, 00:14:02.280 "zone_management": false, 00:14:02.280 "zone_append": false, 00:14:02.280 "compare": false, 00:14:02.280 "compare_and_write": false, 00:14:02.280 "abort": false, 00:14:02.280 "seek_hole": false, 00:14:02.280 "seek_data": false, 00:14:02.280 "copy": false, 00:14:02.280 "nvme_iov_md": false 00:14:02.280 }, 00:14:02.280 "memory_domains": [ 
00:14:02.280 { 00:14:02.280 "dma_device_id": "system", 00:14:02.280 "dma_device_type": 1 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.280 "dma_device_type": 2 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "system", 00:14:02.280 "dma_device_type": 1 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.280 "dma_device_type": 2 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "system", 00:14:02.280 "dma_device_type": 1 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.280 "dma_device_type": 2 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "system", 00:14:02.280 "dma_device_type": 1 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.280 "dma_device_type": 2 00:14:02.280 } 00:14:02.280 ], 00:14:02.280 "driver_specific": { 00:14:02.280 "raid": { 00:14:02.280 "uuid": "01948b51-6164-4a57-8f2d-4b8ac068d1b4", 00:14:02.280 "strip_size_kb": 64, 00:14:02.280 "state": "online", 00:14:02.280 "raid_level": "raid0", 00:14:02.280 "superblock": false, 00:14:02.280 "num_base_bdevs": 4, 00:14:02.280 "num_base_bdevs_discovered": 4, 00:14:02.280 "num_base_bdevs_operational": 4, 00:14:02.280 "base_bdevs_list": [ 00:14:02.280 { 00:14:02.280 "name": "NewBaseBdev", 00:14:02.280 "uuid": "5765a6f3-edef-40c6-a77a-904846f4e45d", 00:14:02.280 "is_configured": true, 00:14:02.280 "data_offset": 0, 00:14:02.280 "data_size": 65536 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "name": "BaseBdev2", 00:14:02.280 "uuid": "ece8f9eb-48ea-46f9-b2ac-0de89b744665", 00:14:02.280 "is_configured": true, 00:14:02.280 "data_offset": 0, 00:14:02.280 "data_size": 65536 00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "name": "BaseBdev3", 00:14:02.280 "uuid": "f0fb44a5-528b-4b51-aa89-ba8317eaeb0e", 00:14:02.280 "is_configured": true, 00:14:02.280 "data_offset": 0, 00:14:02.280 "data_size": 65536 
00:14:02.280 }, 00:14:02.280 { 00:14:02.280 "name": "BaseBdev4", 00:14:02.280 "uuid": "54ac0ca8-3627-4165-9d6e-dc68f21b11fb", 00:14:02.280 "is_configured": true, 00:14:02.280 "data_offset": 0, 00:14:02.280 "data_size": 65536 00:14:02.280 } 00:14:02.280 ] 00:14:02.280 } 00:14:02.280 } 00:14:02.280 }' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:02.280 BaseBdev2 00:14:02.280 BaseBdev3 00:14:02.280 BaseBdev4' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.280 
19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.280 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.539 [2024-12-05 19:33:55.810888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.539 [2024-12-05 19:33:55.810925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.539 [2024-12-05 19:33:55.811020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.539 [2024-12-05 19:33:55.811166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.539 [2024-12-05 19:33:55.811181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69446 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69446 ']' 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69446 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69446 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.539 killing process with pid 69446 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69446' 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69446 00:14:02.539 [2024-12-05 19:33:55.850127] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.539 19:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69446 00:14:02.797 [2024-12-05 19:33:56.192707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:04.174 00:14:04.174 real 0m12.867s 00:14:04.174 user 0m21.256s 00:14:04.174 sys 0m1.905s 00:14:04.174 ************************************ 00:14:04.174 END TEST raid_state_function_test 00:14:04.174 ************************************ 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.174 19:33:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:14:04.174 19:33:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:04.174 19:33:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.174 19:33:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.174 ************************************ 00:14:04.174 START TEST raid_state_function_test_sb 00:14:04.174 ************************************ 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:04.174 
19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:04.174 Process raid pid: 70129 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70129 00:14:04.174 19:33:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70129' 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70129 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70129 ']' 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.174 19:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.174 [2024-12-05 19:33:57.393248] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:14:04.174 [2024-12-05 19:33:57.393397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:04.174 [2024-12-05 19:33:57.570085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:04.432 [2024-12-05 19:33:57.700949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:04.690 [2024-12-05 19:33:57.913889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:04.690 [2024-12-05 19:33:57.913934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.254 [2024-12-05 19:33:58.416185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:05.254 [2024-12-05 19:33:58.416414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:05.254 [2024-12-05 19:33:58.416448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:05.254 [2024-12-05 19:33:58.416468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:05.254 [2024-12-05 19:33:58.416479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:05.254 [2024-12-05 19:33:58.416493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:05.254 [2024-12-05 19:33:58.416503] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:14:05.254 [2024-12-05 19:33:58.416518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:05.254 "name": "Existed_Raid",
00:14:05.254 "uuid": "eb702bd6-204c-4cc4-87a8-4ef02f331265",
00:14:05.254 "strip_size_kb": 64,
00:14:05.254 "state": "configuring",
00:14:05.254 "raid_level": "raid0",
00:14:05.254 "superblock": true,
00:14:05.254 "num_base_bdevs": 4,
00:14:05.254 "num_base_bdevs_discovered": 0,
00:14:05.254 "num_base_bdevs_operational": 4,
00:14:05.254 "base_bdevs_list": [
00:14:05.254 {
00:14:05.254 "name": "BaseBdev1",
00:14:05.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.254 "is_configured": false,
00:14:05.254 "data_offset": 0,
00:14:05.254 "data_size": 0
00:14:05.254 },
00:14:05.254 {
00:14:05.254 "name": "BaseBdev2",
00:14:05.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.254 "is_configured": false,
00:14:05.254 "data_offset": 0,
00:14:05.254 "data_size": 0
00:14:05.254 },
00:14:05.254 {
00:14:05.254 "name": "BaseBdev3",
00:14:05.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.254 "is_configured": false,
00:14:05.254 "data_offset": 0,
00:14:05.254 "data_size": 0
00:14:05.254 },
00:14:05.254 {
00:14:05.254 "name": "BaseBdev4",
00:14:05.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.254 "is_configured": false,
00:14:05.254 "data_offset": 0,
00:14:05.254 "data_size": 0
00:14:05.254 }
00:14:05.254 ]
00:14:05.254 }'
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:05.254 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 [2024-12-05 19:33:58.956363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:05.821 [2024-12-05 19:33:58.956569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 [2024-12-05 19:33:58.968361] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:05.821 [2024-12-05 19:33:58.968577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:05.821 [2024-12-05 19:33:58.968725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:05.821 [2024-12-05 19:33:58.968896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:05.821 [2024-12-05 19:33:58.969009] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:05.821 [2024-12-05 19:33:58.969090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:05.821 [2024-12-05 19:33:58.969331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:14:05.821 [2024-12-05 19:33:58.969398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.821 19:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 [2024-12-05 19:33:59.016926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:05.821 BaseBdev1
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 [
00:14:05.821 {
00:14:05.821 "name": "BaseBdev1",
00:14:05.821 "aliases": [
00:14:05.821 "5d8a0b9c-d095-427f-ad57-12e76e55ae96"
00:14:05.821 ],
00:14:05.821 "product_name": "Malloc disk",
00:14:05.821 "block_size": 512,
00:14:05.821 "num_blocks": 65536,
00:14:05.821 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96",
00:14:05.821 "assigned_rate_limits": {
00:14:05.821 "rw_ios_per_sec": 0,
00:14:05.821 "rw_mbytes_per_sec": 0,
00:14:05.821 "r_mbytes_per_sec": 0,
00:14:05.821 "w_mbytes_per_sec": 0
00:14:05.821 },
00:14:05.821 "claimed": true,
00:14:05.821 "claim_type": "exclusive_write",
00:14:05.821 "zoned": false,
00:14:05.821 "supported_io_types": {
00:14:05.821 "read": true,
00:14:05.821 "write": true,
00:14:05.821 "unmap": true,
00:14:05.821 "flush": true,
00:14:05.821 "reset": true,
00:14:05.821 "nvme_admin": false,
00:14:05.821 "nvme_io": false,
00:14:05.821 "nvme_io_md": false,
00:14:05.821 "write_zeroes": true,
00:14:05.821 "zcopy": true,
00:14:05.821 "get_zone_info": false,
00:14:05.821 "zone_management": false,
00:14:05.821 "zone_append": false,
00:14:05.821 "compare": false,
00:14:05.821 "compare_and_write": false,
00:14:05.821 "abort": true,
00:14:05.821 "seek_hole": false,
00:14:05.821 "seek_data": false,
00:14:05.821 "copy": true,
00:14:05.821 "nvme_iov_md": false
00:14:05.821 },
00:14:05.821 "memory_domains": [
00:14:05.821 {
00:14:05.821 "dma_device_id": "system",
00:14:05.821 "dma_device_type": 1
00:14:05.821 },
00:14:05.821 {
00:14:05.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:05.821 "dma_device_type": 2
00:14:05.821 }
00:14:05.821 ],
00:14:05.821 "driver_specific": {}
00:14:05.821 }
00:14:05.821 ]
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:05.821 "name": "Existed_Raid",
00:14:05.821 "uuid": "6fcf44a9-8ae2-45f7-a1c6-4e795ae73043",
00:14:05.821 "strip_size_kb": 64,
00:14:05.821 "state": "configuring",
00:14:05.821 "raid_level": "raid0",
00:14:05.821 "superblock": true,
00:14:05.821 "num_base_bdevs": 4,
00:14:05.821 "num_base_bdevs_discovered": 1,
00:14:05.821 "num_base_bdevs_operational": 4,
00:14:05.821 "base_bdevs_list": [
00:14:05.821 {
00:14:05.821 "name": "BaseBdev1",
00:14:05.821 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96",
00:14:05.821 "is_configured": true,
00:14:05.821 "data_offset": 2048,
00:14:05.821 "data_size": 63488
00:14:05.821 },
00:14:05.821 {
00:14:05.821 "name": "BaseBdev2",
00:14:05.821 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.821 "is_configured": false,
00:14:05.821 "data_offset": 0,
00:14:05.821 "data_size": 0
00:14:05.821 },
00:14:05.821 {
00:14:05.821 "name": "BaseBdev3",
00:14:05.821 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.821 "is_configured": false,
00:14:05.821 "data_offset": 0,
00:14:05.821 "data_size": 0
00:14:05.821 },
00:14:05.821 {
00:14:05.821 "name": "BaseBdev4",
00:14:05.821 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.821 "is_configured": false,
00:14:05.821 "data_offset": 0,
00:14:05.821 "data_size": 0
00:14:05.821 }
00:14:05.821 ]
00:14:05.821 }'
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:05.821 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.152 [2024-12-05 19:33:59.569176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:06.152 [2024-12-05 19:33:59.569239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.152 [2024-12-05 19:33:59.577250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:06.152 [2024-12-05 19:33:59.579934] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:06.152 [2024-12-05 19:33:59.580144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:06.152 [2024-12-05 19:33:59.580172] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:06.152 [2024-12-05 19:33:59.580193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:06.152 [2024-12-05 19:33:59.580204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:14:06.152 [2024-12-05 19:33:59.580218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.152 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:06.408 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.408 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.408 "name": "Existed_Raid",
00:14:06.408 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167",
00:14:06.408 "strip_size_kb": 64,
00:14:06.408 "state": "configuring",
00:14:06.408 "raid_level": "raid0",
00:14:06.408 "superblock": true,
00:14:06.408 "num_base_bdevs": 4,
00:14:06.408 "num_base_bdevs_discovered": 1,
00:14:06.409 "num_base_bdevs_operational": 4,
00:14:06.409 "base_bdevs_list": [
00:14:06.409 {
00:14:06.409 "name": "BaseBdev1",
00:14:06.409 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96",
00:14:06.409 "is_configured": true,
00:14:06.409 "data_offset": 2048,
00:14:06.409 "data_size": 63488
00:14:06.409 },
00:14:06.409 {
00:14:06.409 "name": "BaseBdev2",
00:14:06.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.409 "is_configured": false,
00:14:06.409 "data_offset": 0,
00:14:06.409 "data_size": 0
00:14:06.409 },
00:14:06.409 {
00:14:06.409 "name": "BaseBdev3",
00:14:06.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.409 "is_configured": false,
00:14:06.409 "data_offset": 0,
00:14:06.409 "data_size": 0
00:14:06.409 },
00:14:06.409 {
00:14:06.409 "name": "BaseBdev4",
00:14:06.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.409 "is_configured": false,
00:14:06.409 "data_offset": 0,
00:14:06.409 "data_size": 0
00:14:06.409 }
00:14:06.409 ]
00:14:06.409 }'
00:14:06.409 19:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.409 19:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.976 [2024-12-05 19:34:00.147828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:06.976 BaseBdev2
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.976 [
00:14:06.976 {
00:14:06.976 "name": "BaseBdev2",
00:14:06.976 "aliases": [
00:14:06.976 "87120f22-f90d-41f0-92fd-425c8970fb86"
00:14:06.976 ],
00:14:06.976 "product_name": "Malloc disk",
00:14:06.976 "block_size": 512,
00:14:06.976 "num_blocks": 65536,
00:14:06.976 "uuid": "87120f22-f90d-41f0-92fd-425c8970fb86",
00:14:06.976 "assigned_rate_limits": {
00:14:06.976 "rw_ios_per_sec": 0,
00:14:06.976 "rw_mbytes_per_sec": 0,
00:14:06.976 "r_mbytes_per_sec": 0,
00:14:06.976 "w_mbytes_per_sec": 0
00:14:06.976 },
00:14:06.976 "claimed": true,
00:14:06.976 "claim_type": "exclusive_write",
00:14:06.976 "zoned": false,
00:14:06.976 "supported_io_types": {
00:14:06.976 "read": true,
00:14:06.976 "write": true,
00:14:06.976 "unmap": true,
00:14:06.976 "flush": true,
00:14:06.976 "reset": true,
00:14:06.976 "nvme_admin": false,
00:14:06.976 "nvme_io": false,
00:14:06.976 "nvme_io_md": false,
00:14:06.976 "write_zeroes": true,
00:14:06.976 "zcopy": true,
00:14:06.976 "get_zone_info": false,
00:14:06.976 "zone_management": false,
00:14:06.976 "zone_append": false,
00:14:06.976 "compare": false,
00:14:06.976 "compare_and_write": false,
00:14:06.976 "abort": true,
00:14:06.976 "seek_hole": false,
00:14:06.976 "seek_data": false,
00:14:06.976 "copy": true,
00:14:06.976 "nvme_iov_md": false
00:14:06.976 },
00:14:06.976 "memory_domains": [
00:14:06.976 {
00:14:06.976 "dma_device_id": "system",
00:14:06.976 "dma_device_type": 1
00:14:06.976 },
00:14:06.976 {
00:14:06.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:06.976 "dma_device_type": 2
00:14:06.976 }
00:14:06.976 ],
00:14:06.976 "driver_specific": {}
00:14:06.976 }
00:14:06.976 ]
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.976 "name": "Existed_Raid",
00:14:06.976 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167",
00:14:06.976 "strip_size_kb": 64,
00:14:06.976 "state": "configuring",
00:14:06.976 "raid_level": "raid0",
00:14:06.976 "superblock": true,
00:14:06.976 "num_base_bdevs": 4,
00:14:06.976 "num_base_bdevs_discovered": 2,
00:14:06.976 "num_base_bdevs_operational": 4,
00:14:06.976 "base_bdevs_list": [
00:14:06.976 {
00:14:06.976 "name": "BaseBdev1",
00:14:06.976 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96",
00:14:06.976 "is_configured": true,
00:14:06.976 "data_offset": 2048,
00:14:06.976 "data_size": 63488
00:14:06.976 },
00:14:06.976 {
00:14:06.976 "name": "BaseBdev2",
00:14:06.976 "uuid": "87120f22-f90d-41f0-92fd-425c8970fb86",
00:14:06.976 "is_configured": true,
00:14:06.976 "data_offset": 2048,
00:14:06.976 "data_size": 63488
00:14:06.976 },
00:14:06.976 {
00:14:06.976 "name": "BaseBdev3",
00:14:06.976 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.976 "is_configured": false,
00:14:06.976 "data_offset": 0,
00:14:06.976 "data_size": 0
00:14:06.976 },
00:14:06.976 {
00:14:06.976 "name": "BaseBdev4",
00:14:06.976 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.976 "is_configured": false,
00:14:06.976 "data_offset": 0,
00:14:06.976 "data_size": 0
00:14:06.976 }
00:14:06.976 ]
00:14:06.976 }'
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.976 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.541 [2024-12-05 19:34:00.789545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:07.541 BaseBdev3
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.541 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.541 [
00:14:07.541 {
00:14:07.541 "name": "BaseBdev3",
00:14:07.541 "aliases": [
00:14:07.541 "8469076b-a809-40d3-8152-8f6f9b16673c"
00:14:07.541 ],
00:14:07.541 "product_name": "Malloc disk",
00:14:07.541 "block_size": 512,
00:14:07.541 "num_blocks": 65536,
00:14:07.541 "uuid": "8469076b-a809-40d3-8152-8f6f9b16673c",
00:14:07.541 "assigned_rate_limits": {
00:14:07.541 "rw_ios_per_sec": 0,
00:14:07.541 "rw_mbytes_per_sec": 0,
00:14:07.541 "r_mbytes_per_sec": 0,
00:14:07.541 "w_mbytes_per_sec": 0
00:14:07.541 },
00:14:07.541 "claimed": true,
00:14:07.541 "claim_type": "exclusive_write",
00:14:07.541 "zoned": false,
00:14:07.541 "supported_io_types": {
00:14:07.541 "read": true,
00:14:07.541 "write": true,
00:14:07.541 "unmap": true,
00:14:07.542 "flush": true,
00:14:07.542 "reset": true,
00:14:07.542 "nvme_admin": false,
00:14:07.542 "nvme_io": false,
00:14:07.542 "nvme_io_md": false,
00:14:07.542 "write_zeroes": true,
00:14:07.542 "zcopy": true,
00:14:07.542 "get_zone_info": false,
00:14:07.542 "zone_management": false,
00:14:07.542 "zone_append": false,
00:14:07.542 "compare": false,
00:14:07.542 "compare_and_write": false,
00:14:07.542 "abort": true,
00:14:07.542 "seek_hole": false,
00:14:07.542 "seek_data": false,
00:14:07.542 "copy": true,
00:14:07.542 "nvme_iov_md": false
00:14:07.542 },
00:14:07.542 "memory_domains": [
00:14:07.542 {
00:14:07.542 "dma_device_id": "system",
00:14:07.542 "dma_device_type": 1
00:14:07.542 },
00:14:07.542 {
00:14:07.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:07.542 "dma_device_type": 2
00:14:07.542 }
00:14:07.542 ],
00:14:07.542 "driver_specific": {}
00:14:07.542 }
00:14:07.542 ]
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:07.542 "name": "Existed_Raid",
00:14:07.542 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167",
00:14:07.542 "strip_size_kb": 64,
00:14:07.542 "state": "configuring",
00:14:07.542 "raid_level": "raid0",
00:14:07.542 "superblock": true,
00:14:07.542 "num_base_bdevs": 4,
00:14:07.542 "num_base_bdevs_discovered": 3,
00:14:07.542 "num_base_bdevs_operational": 4,
00:14:07.542 "base_bdevs_list": [
00:14:07.542 {
00:14:07.542 "name": "BaseBdev1",
00:14:07.542 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96",
00:14:07.542 "is_configured": true,
00:14:07.542 "data_offset": 2048,
00:14:07.542 "data_size": 63488
00:14:07.542 },
00:14:07.542 {
00:14:07.542 "name": "BaseBdev2",
00:14:07.542 "uuid": "87120f22-f90d-41f0-92fd-425c8970fb86",
00:14:07.542 "is_configured": true,
00:14:07.542 "data_offset": 2048,
00:14:07.542 "data_size": 63488
00:14:07.542 },
00:14:07.542 {
00:14:07.542 "name": "BaseBdev3",
00:14:07.542 "uuid": "8469076b-a809-40d3-8152-8f6f9b16673c",
00:14:07.542 "is_configured": true,
00:14:07.542 "data_offset": 2048,
00:14:07.542 "data_size": 63488
00:14:07.542 },
00:14:07.542 {
00:14:07.542 "name": "BaseBdev4",
00:14:07.542 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:07.542 "is_configured": false,
00:14:07.542 "data_offset": 0,
00:14:07.542 "data_size": 0
00:14:07.542 }
00:14:07.542 ]
00:14:07.542 }'
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:07.542 19:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:08.107 [2024-12-05 19:34:01.391211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:08.107 BaseBdev4
00:14:08.107 [2024-12-05 19:34:01.391782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:08.107 [2024-12-05 19:34:01.391809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:14:08.107 [2024-12-05 19:34:01.392198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:14:08.107 [2024-12-05 19:34:01.392366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:08.107 [2024-12-05 19:34:01.392384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid,
raid_bdev 0x617000007e80 00:14:08.107 [2024-12-05 19:34:01.392541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.107 [ 00:14:08.107 { 00:14:08.107 "name": "BaseBdev4", 00:14:08.107 "aliases": [ 00:14:08.107 "7d0f0d1e-2091-4e90-9b53-caabe8d48338" 00:14:08.107 ], 00:14:08.107 "product_name": "Malloc disk", 00:14:08.107 "block_size": 512, 00:14:08.107 
"num_blocks": 65536, 00:14:08.107 "uuid": "7d0f0d1e-2091-4e90-9b53-caabe8d48338", 00:14:08.107 "assigned_rate_limits": { 00:14:08.107 "rw_ios_per_sec": 0, 00:14:08.107 "rw_mbytes_per_sec": 0, 00:14:08.107 "r_mbytes_per_sec": 0, 00:14:08.107 "w_mbytes_per_sec": 0 00:14:08.107 }, 00:14:08.107 "claimed": true, 00:14:08.107 "claim_type": "exclusive_write", 00:14:08.107 "zoned": false, 00:14:08.107 "supported_io_types": { 00:14:08.107 "read": true, 00:14:08.107 "write": true, 00:14:08.107 "unmap": true, 00:14:08.107 "flush": true, 00:14:08.107 "reset": true, 00:14:08.107 "nvme_admin": false, 00:14:08.107 "nvme_io": false, 00:14:08.107 "nvme_io_md": false, 00:14:08.107 "write_zeroes": true, 00:14:08.107 "zcopy": true, 00:14:08.107 "get_zone_info": false, 00:14:08.107 "zone_management": false, 00:14:08.107 "zone_append": false, 00:14:08.107 "compare": false, 00:14:08.107 "compare_and_write": false, 00:14:08.107 "abort": true, 00:14:08.107 "seek_hole": false, 00:14:08.107 "seek_data": false, 00:14:08.107 "copy": true, 00:14:08.107 "nvme_iov_md": false 00:14:08.107 }, 00:14:08.107 "memory_domains": [ 00:14:08.107 { 00:14:08.107 "dma_device_id": "system", 00:14:08.107 "dma_device_type": 1 00:14:08.107 }, 00:14:08.107 { 00:14:08.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.107 "dma_device_type": 2 00:14:08.107 } 00:14:08.107 ], 00:14:08.107 "driver_specific": {} 00:14:08.107 } 00:14:08.107 ] 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.107 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.108 "name": "Existed_Raid", 00:14:08.108 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167", 00:14:08.108 "strip_size_kb": 64, 00:14:08.108 "state": "online", 00:14:08.108 "raid_level": "raid0", 00:14:08.108 "superblock": true, 00:14:08.108 "num_base_bdevs": 4, 
00:14:08.108 "num_base_bdevs_discovered": 4, 00:14:08.108 "num_base_bdevs_operational": 4, 00:14:08.108 "base_bdevs_list": [ 00:14:08.108 { 00:14:08.108 "name": "BaseBdev1", 00:14:08.108 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 2048, 00:14:08.108 "data_size": 63488 00:14:08.108 }, 00:14:08.108 { 00:14:08.108 "name": "BaseBdev2", 00:14:08.108 "uuid": "87120f22-f90d-41f0-92fd-425c8970fb86", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 2048, 00:14:08.108 "data_size": 63488 00:14:08.108 }, 00:14:08.108 { 00:14:08.108 "name": "BaseBdev3", 00:14:08.108 "uuid": "8469076b-a809-40d3-8152-8f6f9b16673c", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 2048, 00:14:08.108 "data_size": 63488 00:14:08.108 }, 00:14:08.108 { 00:14:08.108 "name": "BaseBdev4", 00:14:08.108 "uuid": "7d0f0d1e-2091-4e90-9b53-caabe8d48338", 00:14:08.108 "is_configured": true, 00:14:08.108 "data_offset": 2048, 00:14:08.108 "data_size": 63488 00:14:08.108 } 00:14:08.108 ] 00:14:08.108 }' 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.108 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.676 
19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.676 [2024-12-05 19:34:01.959956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.676 19:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.676 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.676 "name": "Existed_Raid", 00:14:08.676 "aliases": [ 00:14:08.676 "d41cb563-ab97-407c-ba3e-72e1eebde167" 00:14:08.676 ], 00:14:08.676 "product_name": "Raid Volume", 00:14:08.676 "block_size": 512, 00:14:08.676 "num_blocks": 253952, 00:14:08.676 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167", 00:14:08.676 "assigned_rate_limits": { 00:14:08.676 "rw_ios_per_sec": 0, 00:14:08.676 "rw_mbytes_per_sec": 0, 00:14:08.676 "r_mbytes_per_sec": 0, 00:14:08.676 "w_mbytes_per_sec": 0 00:14:08.676 }, 00:14:08.676 "claimed": false, 00:14:08.676 "zoned": false, 00:14:08.676 "supported_io_types": { 00:14:08.676 "read": true, 00:14:08.676 "write": true, 00:14:08.677 "unmap": true, 00:14:08.677 "flush": true, 00:14:08.677 "reset": true, 00:14:08.677 "nvme_admin": false, 00:14:08.677 "nvme_io": false, 00:14:08.677 "nvme_io_md": false, 00:14:08.677 "write_zeroes": true, 00:14:08.677 "zcopy": false, 00:14:08.677 "get_zone_info": false, 00:14:08.677 "zone_management": false, 00:14:08.677 "zone_append": false, 00:14:08.677 "compare": false, 00:14:08.677 "compare_and_write": false, 00:14:08.677 "abort": false, 00:14:08.677 "seek_hole": false, 00:14:08.677 "seek_data": false, 00:14:08.677 "copy": false, 00:14:08.677 
"nvme_iov_md": false 00:14:08.677 }, 00:14:08.677 "memory_domains": [ 00:14:08.677 { 00:14:08.677 "dma_device_id": "system", 00:14:08.677 "dma_device_type": 1 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.677 "dma_device_type": 2 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "system", 00:14:08.677 "dma_device_type": 1 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.677 "dma_device_type": 2 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "system", 00:14:08.677 "dma_device_type": 1 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.677 "dma_device_type": 2 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "system", 00:14:08.677 "dma_device_type": 1 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.677 "dma_device_type": 2 00:14:08.677 } 00:14:08.677 ], 00:14:08.677 "driver_specific": { 00:14:08.677 "raid": { 00:14:08.677 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167", 00:14:08.677 "strip_size_kb": 64, 00:14:08.677 "state": "online", 00:14:08.677 "raid_level": "raid0", 00:14:08.677 "superblock": true, 00:14:08.677 "num_base_bdevs": 4, 00:14:08.677 "num_base_bdevs_discovered": 4, 00:14:08.677 "num_base_bdevs_operational": 4, 00:14:08.677 "base_bdevs_list": [ 00:14:08.677 { 00:14:08.677 "name": "BaseBdev1", 00:14:08.677 "uuid": "5d8a0b9c-d095-427f-ad57-12e76e55ae96", 00:14:08.677 "is_configured": true, 00:14:08.677 "data_offset": 2048, 00:14:08.677 "data_size": 63488 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "name": "BaseBdev2", 00:14:08.677 "uuid": "87120f22-f90d-41f0-92fd-425c8970fb86", 00:14:08.677 "is_configured": true, 00:14:08.677 "data_offset": 2048, 00:14:08.677 "data_size": 63488 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "name": "BaseBdev3", 00:14:08.677 "uuid": "8469076b-a809-40d3-8152-8f6f9b16673c", 00:14:08.677 "is_configured": true, 
00:14:08.677 "data_offset": 2048, 00:14:08.677 "data_size": 63488 00:14:08.677 }, 00:14:08.677 { 00:14:08.677 "name": "BaseBdev4", 00:14:08.677 "uuid": "7d0f0d1e-2091-4e90-9b53-caabe8d48338", 00:14:08.677 "is_configured": true, 00:14:08.677 "data_offset": 2048, 00:14:08.677 "data_size": 63488 00:14:08.677 } 00:14:08.677 ] 00:14:08.677 } 00:14:08.677 } 00:14:08.677 }' 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:08.677 BaseBdev2 00:14:08.677 BaseBdev3 00:14:08.677 BaseBdev4' 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.677 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.936 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.937 19:34:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.937 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.937 [2024-12-05 19:34:02.339762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.937 [2024-12-05 19:34:02.339923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.937 [2024-12-05 19:34:02.340213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.221 "name": "Existed_Raid", 00:14:09.221 "uuid": "d41cb563-ab97-407c-ba3e-72e1eebde167", 00:14:09.221 "strip_size_kb": 64, 00:14:09.221 "state": "offline", 00:14:09.221 "raid_level": "raid0", 00:14:09.221 "superblock": true, 00:14:09.221 "num_base_bdevs": 4, 00:14:09.221 "num_base_bdevs_discovered": 3, 00:14:09.221 "num_base_bdevs_operational": 3, 00:14:09.221 "base_bdevs_list": [ 00:14:09.221 { 00:14:09.221 "name": null, 00:14:09.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.221 "is_configured": false, 00:14:09.221 "data_offset": 0, 00:14:09.221 "data_size": 63488 00:14:09.221 }, 00:14:09.221 { 00:14:09.221 "name": "BaseBdev2", 00:14:09.221 "uuid": "87120f22-f90d-41f0-92fd-425c8970fb86", 00:14:09.221 "is_configured": true, 00:14:09.221 "data_offset": 2048, 00:14:09.221 "data_size": 63488 00:14:09.221 }, 00:14:09.221 { 00:14:09.221 "name": "BaseBdev3", 00:14:09.221 "uuid": "8469076b-a809-40d3-8152-8f6f9b16673c", 00:14:09.221 "is_configured": true, 00:14:09.221 "data_offset": 2048, 00:14:09.221 "data_size": 63488 00:14:09.221 }, 00:14:09.221 { 00:14:09.221 "name": "BaseBdev4", 00:14:09.221 "uuid": "7d0f0d1e-2091-4e90-9b53-caabe8d48338", 00:14:09.221 "is_configured": true, 00:14:09.221 "data_offset": 2048, 00:14:09.221 "data_size": 63488 00:14:09.221 } 00:14:09.221 ] 00:14:09.221 }' 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.221 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.789 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:09.789 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:09.789 19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.789 
19:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:09.789 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.789 19:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.789 [2024-12-05 19:34:03.054188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.789 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.789 [2024-12-05 19:34:03.199326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:10.049 19:34:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.049 [2024-12-05 19:34:03.342570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:10.049 [2024-12-05 19:34:03.342629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.049 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.308 BaseBdev2 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.308 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.308 [ 00:14:10.308 { 00:14:10.308 "name": "BaseBdev2", 00:14:10.308 "aliases": [ 00:14:10.308 
"7277c453-1b26-4f05-91a8-d911e873c748" 00:14:10.308 ], 00:14:10.308 "product_name": "Malloc disk", 00:14:10.308 "block_size": 512, 00:14:10.308 "num_blocks": 65536, 00:14:10.308 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:10.308 "assigned_rate_limits": { 00:14:10.308 "rw_ios_per_sec": 0, 00:14:10.308 "rw_mbytes_per_sec": 0, 00:14:10.308 "r_mbytes_per_sec": 0, 00:14:10.308 "w_mbytes_per_sec": 0 00:14:10.308 }, 00:14:10.308 "claimed": false, 00:14:10.308 "zoned": false, 00:14:10.308 "supported_io_types": { 00:14:10.308 "read": true, 00:14:10.308 "write": true, 00:14:10.308 "unmap": true, 00:14:10.308 "flush": true, 00:14:10.308 "reset": true, 00:14:10.308 "nvme_admin": false, 00:14:10.308 "nvme_io": false, 00:14:10.308 "nvme_io_md": false, 00:14:10.308 "write_zeroes": true, 00:14:10.309 "zcopy": true, 00:14:10.309 "get_zone_info": false, 00:14:10.309 "zone_management": false, 00:14:10.309 "zone_append": false, 00:14:10.309 "compare": false, 00:14:10.309 "compare_and_write": false, 00:14:10.309 "abort": true, 00:14:10.309 "seek_hole": false, 00:14:10.309 "seek_data": false, 00:14:10.309 "copy": true, 00:14:10.309 "nvme_iov_md": false 00:14:10.309 }, 00:14:10.309 "memory_domains": [ 00:14:10.309 { 00:14:10.309 "dma_device_id": "system", 00:14:10.309 "dma_device_type": 1 00:14:10.309 }, 00:14:10.309 { 00:14:10.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.309 "dma_device_type": 2 00:14:10.309 } 00:14:10.309 ], 00:14:10.309 "driver_specific": {} 00:14:10.309 } 00:14:10.309 ] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.309 19:34:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.309 BaseBdev3 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.309 [ 00:14:10.309 { 
00:14:10.309 "name": "BaseBdev3", 00:14:10.309 "aliases": [ 00:14:10.309 "a5e33d5d-cb08-4445-bdd2-4befb869cf67" 00:14:10.309 ], 00:14:10.309 "product_name": "Malloc disk", 00:14:10.309 "block_size": 512, 00:14:10.309 "num_blocks": 65536, 00:14:10.309 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:10.309 "assigned_rate_limits": { 00:14:10.309 "rw_ios_per_sec": 0, 00:14:10.309 "rw_mbytes_per_sec": 0, 00:14:10.309 "r_mbytes_per_sec": 0, 00:14:10.309 "w_mbytes_per_sec": 0 00:14:10.309 }, 00:14:10.309 "claimed": false, 00:14:10.309 "zoned": false, 00:14:10.309 "supported_io_types": { 00:14:10.309 "read": true, 00:14:10.309 "write": true, 00:14:10.309 "unmap": true, 00:14:10.309 "flush": true, 00:14:10.309 "reset": true, 00:14:10.309 "nvme_admin": false, 00:14:10.309 "nvme_io": false, 00:14:10.309 "nvme_io_md": false, 00:14:10.309 "write_zeroes": true, 00:14:10.309 "zcopy": true, 00:14:10.309 "get_zone_info": false, 00:14:10.309 "zone_management": false, 00:14:10.309 "zone_append": false, 00:14:10.309 "compare": false, 00:14:10.309 "compare_and_write": false, 00:14:10.309 "abort": true, 00:14:10.309 "seek_hole": false, 00:14:10.309 "seek_data": false, 00:14:10.309 "copy": true, 00:14:10.309 "nvme_iov_md": false 00:14:10.309 }, 00:14:10.309 "memory_domains": [ 00:14:10.309 { 00:14:10.309 "dma_device_id": "system", 00:14:10.309 "dma_device_type": 1 00:14:10.309 }, 00:14:10.309 { 00:14:10.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.309 "dma_device_type": 2 00:14:10.309 } 00:14:10.309 ], 00:14:10.309 "driver_specific": {} 00:14:10.309 } 00:14:10.309 ] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.309 BaseBdev4 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:10.309 [ 00:14:10.309 { 00:14:10.309 "name": "BaseBdev4", 00:14:10.309 "aliases": [ 00:14:10.309 "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d" 00:14:10.309 ], 00:14:10.309 "product_name": "Malloc disk", 00:14:10.309 "block_size": 512, 00:14:10.309 "num_blocks": 65536, 00:14:10.309 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:10.309 "assigned_rate_limits": { 00:14:10.309 "rw_ios_per_sec": 0, 00:14:10.309 "rw_mbytes_per_sec": 0, 00:14:10.309 "r_mbytes_per_sec": 0, 00:14:10.309 "w_mbytes_per_sec": 0 00:14:10.309 }, 00:14:10.309 "claimed": false, 00:14:10.309 "zoned": false, 00:14:10.309 "supported_io_types": { 00:14:10.309 "read": true, 00:14:10.309 "write": true, 00:14:10.309 "unmap": true, 00:14:10.309 "flush": true, 00:14:10.309 "reset": true, 00:14:10.309 "nvme_admin": false, 00:14:10.309 "nvme_io": false, 00:14:10.309 "nvme_io_md": false, 00:14:10.309 "write_zeroes": true, 00:14:10.309 "zcopy": true, 00:14:10.309 "get_zone_info": false, 00:14:10.309 "zone_management": false, 00:14:10.309 "zone_append": false, 00:14:10.309 "compare": false, 00:14:10.309 "compare_and_write": false, 00:14:10.309 "abort": true, 00:14:10.309 "seek_hole": false, 00:14:10.309 "seek_data": false, 00:14:10.309 "copy": true, 00:14:10.309 "nvme_iov_md": false 00:14:10.309 }, 00:14:10.309 "memory_domains": [ 00:14:10.309 { 00:14:10.309 "dma_device_id": "system", 00:14:10.309 "dma_device_type": 1 00:14:10.309 }, 00:14:10.309 { 00:14:10.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.309 "dma_device_type": 2 00:14:10.309 } 00:14:10.309 ], 00:14:10.309 "driver_specific": {} 00:14:10.309 } 00:14:10.309 ] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:10.309 19:34:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.309 [2024-12-05 19:34:03.695636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.309 [2024-12-05 19:34:03.695857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.309 [2024-12-05 19:34:03.695985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.309 [2024-12-05 19:34:03.698400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.309 [2024-12-05 19:34:03.698589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.309 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.310 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.569 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.569 "name": "Existed_Raid", 00:14:10.569 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:10.569 "strip_size_kb": 64, 00:14:10.569 "state": "configuring", 00:14:10.569 "raid_level": "raid0", 00:14:10.569 "superblock": true, 00:14:10.569 "num_base_bdevs": 4, 00:14:10.569 "num_base_bdevs_discovered": 3, 00:14:10.569 "num_base_bdevs_operational": 4, 00:14:10.569 "base_bdevs_list": [ 00:14:10.569 { 00:14:10.569 "name": "BaseBdev1", 00:14:10.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.569 "is_configured": false, 00:14:10.569 "data_offset": 0, 00:14:10.569 "data_size": 0 00:14:10.569 }, 00:14:10.569 { 00:14:10.569 "name": "BaseBdev2", 00:14:10.569 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:10.569 "is_configured": true, 00:14:10.569 "data_offset": 2048, 00:14:10.569 "data_size": 63488 
00:14:10.569 }, 00:14:10.569 { 00:14:10.569 "name": "BaseBdev3", 00:14:10.569 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:10.569 "is_configured": true, 00:14:10.569 "data_offset": 2048, 00:14:10.569 "data_size": 63488 00:14:10.569 }, 00:14:10.569 { 00:14:10.569 "name": "BaseBdev4", 00:14:10.569 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:10.569 "is_configured": true, 00:14:10.569 "data_offset": 2048, 00:14:10.569 "data_size": 63488 00:14:10.569 } 00:14:10.569 ] 00:14:10.569 }' 00:14:10.569 19:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.569 19:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.829 [2024-12-05 19:34:04.227857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.829 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.087 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.087 "name": "Existed_Raid", 00:14:11.087 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:11.087 "strip_size_kb": 64, 00:14:11.087 "state": "configuring", 00:14:11.087 "raid_level": "raid0", 00:14:11.087 "superblock": true, 00:14:11.087 "num_base_bdevs": 4, 00:14:11.087 "num_base_bdevs_discovered": 2, 00:14:11.087 "num_base_bdevs_operational": 4, 00:14:11.087 "base_bdevs_list": [ 00:14:11.087 { 00:14:11.087 "name": "BaseBdev1", 00:14:11.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.087 "is_configured": false, 00:14:11.087 "data_offset": 0, 00:14:11.087 "data_size": 0 00:14:11.087 }, 00:14:11.087 { 00:14:11.087 "name": null, 00:14:11.087 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:11.087 "is_configured": false, 00:14:11.087 "data_offset": 0, 00:14:11.087 "data_size": 63488 
00:14:11.087 }, 00:14:11.087 { 00:14:11.087 "name": "BaseBdev3", 00:14:11.087 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:11.087 "is_configured": true, 00:14:11.087 "data_offset": 2048, 00:14:11.087 "data_size": 63488 00:14:11.087 }, 00:14:11.087 { 00:14:11.087 "name": "BaseBdev4", 00:14:11.087 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:11.087 "is_configured": true, 00:14:11.087 "data_offset": 2048, 00:14:11.087 "data_size": 63488 00:14:11.087 } 00:14:11.087 ] 00:14:11.087 }' 00:14:11.087 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.087 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.346 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:11.346 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.346 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.346 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.346 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.605 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.606 [2024-12-05 19:34:04.833867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.606 BaseBdev1 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.606 [ 00:14:11.606 { 00:14:11.606 "name": "BaseBdev1", 00:14:11.606 "aliases": [ 00:14:11.606 "5553b3e7-9c30-441a-8c45-2b69113f54df" 00:14:11.606 ], 00:14:11.606 "product_name": "Malloc disk", 00:14:11.606 "block_size": 512, 00:14:11.606 "num_blocks": 65536, 00:14:11.606 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:11.606 "assigned_rate_limits": { 00:14:11.606 "rw_ios_per_sec": 0, 00:14:11.606 "rw_mbytes_per_sec": 0, 
00:14:11.606 "r_mbytes_per_sec": 0, 00:14:11.606 "w_mbytes_per_sec": 0 00:14:11.606 }, 00:14:11.606 "claimed": true, 00:14:11.606 "claim_type": "exclusive_write", 00:14:11.606 "zoned": false, 00:14:11.606 "supported_io_types": { 00:14:11.606 "read": true, 00:14:11.606 "write": true, 00:14:11.606 "unmap": true, 00:14:11.606 "flush": true, 00:14:11.606 "reset": true, 00:14:11.606 "nvme_admin": false, 00:14:11.606 "nvme_io": false, 00:14:11.606 "nvme_io_md": false, 00:14:11.606 "write_zeroes": true, 00:14:11.606 "zcopy": true, 00:14:11.606 "get_zone_info": false, 00:14:11.606 "zone_management": false, 00:14:11.606 "zone_append": false, 00:14:11.606 "compare": false, 00:14:11.606 "compare_and_write": false, 00:14:11.606 "abort": true, 00:14:11.606 "seek_hole": false, 00:14:11.606 "seek_data": false, 00:14:11.606 "copy": true, 00:14:11.606 "nvme_iov_md": false 00:14:11.606 }, 00:14:11.606 "memory_domains": [ 00:14:11.606 { 00:14:11.606 "dma_device_id": "system", 00:14:11.606 "dma_device_type": 1 00:14:11.606 }, 00:14:11.606 { 00:14:11.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.606 "dma_device_type": 2 00:14:11.606 } 00:14:11.606 ], 00:14:11.606 "driver_specific": {} 00:14:11.606 } 00:14:11.606 ] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.606 19:34:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.606 "name": "Existed_Raid", 00:14:11.606 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:11.606 "strip_size_kb": 64, 00:14:11.606 "state": "configuring", 00:14:11.606 "raid_level": "raid0", 00:14:11.606 "superblock": true, 00:14:11.606 "num_base_bdevs": 4, 00:14:11.606 "num_base_bdevs_discovered": 3, 00:14:11.606 "num_base_bdevs_operational": 4, 00:14:11.606 "base_bdevs_list": [ 00:14:11.606 { 00:14:11.606 "name": "BaseBdev1", 00:14:11.606 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:11.606 "is_configured": true, 00:14:11.606 "data_offset": 2048, 00:14:11.606 "data_size": 63488 00:14:11.606 }, 00:14:11.606 { 
00:14:11.606 "name": null, 00:14:11.606 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:11.606 "is_configured": false, 00:14:11.606 "data_offset": 0, 00:14:11.606 "data_size": 63488 00:14:11.606 }, 00:14:11.606 { 00:14:11.606 "name": "BaseBdev3", 00:14:11.606 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:11.606 "is_configured": true, 00:14:11.606 "data_offset": 2048, 00:14:11.606 "data_size": 63488 00:14:11.606 }, 00:14:11.606 { 00:14:11.606 "name": "BaseBdev4", 00:14:11.606 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:11.606 "is_configured": true, 00:14:11.606 "data_offset": 2048, 00:14:11.606 "data_size": 63488 00:14:11.606 } 00:14:11.606 ] 00:14:11.606 }' 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.606 19:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.182 [2024-12-05 19:34:05.458198] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.182 19:34:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.182 "name": "Existed_Raid", 00:14:12.182 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:12.182 "strip_size_kb": 64, 00:14:12.182 "state": "configuring", 00:14:12.182 "raid_level": "raid0", 00:14:12.182 "superblock": true, 00:14:12.182 "num_base_bdevs": 4, 00:14:12.182 "num_base_bdevs_discovered": 2, 00:14:12.182 "num_base_bdevs_operational": 4, 00:14:12.182 "base_bdevs_list": [ 00:14:12.182 { 00:14:12.182 "name": "BaseBdev1", 00:14:12.182 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:12.182 "is_configured": true, 00:14:12.182 "data_offset": 2048, 00:14:12.182 "data_size": 63488 00:14:12.182 }, 00:14:12.182 { 00:14:12.182 "name": null, 00:14:12.182 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:12.182 "is_configured": false, 00:14:12.182 "data_offset": 0, 00:14:12.182 "data_size": 63488 00:14:12.182 }, 00:14:12.182 { 00:14:12.182 "name": null, 00:14:12.182 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:12.182 "is_configured": false, 00:14:12.182 "data_offset": 0, 00:14:12.182 "data_size": 63488 00:14:12.182 }, 00:14:12.182 { 00:14:12.182 "name": "BaseBdev4", 00:14:12.182 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:12.182 "is_configured": true, 00:14:12.182 "data_offset": 2048, 00:14:12.182 "data_size": 63488 00:14:12.182 } 00:14:12.182 ] 00:14:12.182 }' 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.182 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.748 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.748 19:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:12.748 19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.748 
19:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.748 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.748 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:12.748 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.749 [2024-12-05 19:34:06.050330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.749 "name": "Existed_Raid", 00:14:12.749 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:12.749 "strip_size_kb": 64, 00:14:12.749 "state": "configuring", 00:14:12.749 "raid_level": "raid0", 00:14:12.749 "superblock": true, 00:14:12.749 "num_base_bdevs": 4, 00:14:12.749 "num_base_bdevs_discovered": 3, 00:14:12.749 "num_base_bdevs_operational": 4, 00:14:12.749 "base_bdevs_list": [ 00:14:12.749 { 00:14:12.749 "name": "BaseBdev1", 00:14:12.749 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:12.749 "is_configured": true, 00:14:12.749 "data_offset": 2048, 00:14:12.749 "data_size": 63488 00:14:12.749 }, 00:14:12.749 { 00:14:12.749 "name": null, 00:14:12.749 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:12.749 "is_configured": false, 00:14:12.749 "data_offset": 0, 00:14:12.749 "data_size": 63488 00:14:12.749 }, 00:14:12.749 { 00:14:12.749 "name": "BaseBdev3", 00:14:12.749 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:12.749 "is_configured": true, 00:14:12.749 "data_offset": 2048, 00:14:12.749 "data_size": 63488 00:14:12.749 }, 00:14:12.749 { 00:14:12.749 "name": "BaseBdev4", 00:14:12.749 "uuid": 
"7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:12.749 "is_configured": true, 00:14:12.749 "data_offset": 2048, 00:14:12.749 "data_size": 63488 00:14:12.749 } 00:14:12.749 ] 00:14:12.749 }' 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.749 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.316 [2024-12-05 19:34:06.654561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.316 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.317 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.575 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.575 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.575 "name": "Existed_Raid", 00:14:13.575 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:13.575 "strip_size_kb": 64, 00:14:13.575 "state": "configuring", 00:14:13.575 "raid_level": "raid0", 00:14:13.575 "superblock": true, 00:14:13.575 "num_base_bdevs": 4, 00:14:13.575 "num_base_bdevs_discovered": 2, 00:14:13.575 "num_base_bdevs_operational": 4, 00:14:13.575 "base_bdevs_list": [ 00:14:13.575 { 00:14:13.575 "name": null, 00:14:13.575 
"uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:13.575 "is_configured": false, 00:14:13.575 "data_offset": 0, 00:14:13.575 "data_size": 63488 00:14:13.575 }, 00:14:13.575 { 00:14:13.575 "name": null, 00:14:13.575 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:13.575 "is_configured": false, 00:14:13.575 "data_offset": 0, 00:14:13.575 "data_size": 63488 00:14:13.575 }, 00:14:13.575 { 00:14:13.575 "name": "BaseBdev3", 00:14:13.575 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:13.575 "is_configured": true, 00:14:13.575 "data_offset": 2048, 00:14:13.575 "data_size": 63488 00:14:13.575 }, 00:14:13.575 { 00:14:13.575 "name": "BaseBdev4", 00:14:13.575 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:13.575 "is_configured": true, 00:14:13.575 "data_offset": 2048, 00:14:13.575 "data_size": 63488 00:14:13.575 } 00:14:13.575 ] 00:14:13.575 }' 00:14:13.575 19:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.575 19:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.834 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.834 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.834 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.834 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 [2024-12-05 19:34:07.313248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.093 19:34:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.093 "name": "Existed_Raid", 00:14:14.093 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:14.093 "strip_size_kb": 64, 00:14:14.093 "state": "configuring", 00:14:14.093 "raid_level": "raid0", 00:14:14.093 "superblock": true, 00:14:14.093 "num_base_bdevs": 4, 00:14:14.093 "num_base_bdevs_discovered": 3, 00:14:14.093 "num_base_bdevs_operational": 4, 00:14:14.093 "base_bdevs_list": [ 00:14:14.093 { 00:14:14.093 "name": null, 00:14:14.093 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:14.093 "is_configured": false, 00:14:14.093 "data_offset": 0, 00:14:14.093 "data_size": 63488 00:14:14.093 }, 00:14:14.093 { 00:14:14.093 "name": "BaseBdev2", 00:14:14.093 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:14.093 "is_configured": true, 00:14:14.093 "data_offset": 2048, 00:14:14.093 "data_size": 63488 00:14:14.093 }, 00:14:14.093 { 00:14:14.093 "name": "BaseBdev3", 00:14:14.093 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:14.093 "is_configured": true, 00:14:14.093 "data_offset": 2048, 00:14:14.093 "data_size": 63488 00:14:14.093 }, 00:14:14.093 { 00:14:14.093 "name": "BaseBdev4", 00:14:14.093 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:14.093 "is_configured": true, 00:14:14.093 "data_offset": 2048, 00:14:14.093 "data_size": 63488 00:14:14.093 } 00:14:14.093 ] 00:14:14.093 }' 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.093 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.664 19:34:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5553b3e7-9c30-441a-8c45-2b69113f54df 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.664 19:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.664 [2024-12-05 19:34:08.008562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:14.664 NewBaseBdev 00:14:14.664 [2024-12-05 19:34:08.009060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:14.664 [2024-12-05 19:34:08.009085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:14.664 [2024-12-05 19:34:08.009422] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:14.664 [2024-12-05 19:34:08.009593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:14.664 [2024-12-05 19:34:08.009613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:14.664 [2024-12-05 19:34:08.009801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.664 
19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.664 [ 00:14:14.664 { 00:14:14.664 "name": "NewBaseBdev", 00:14:14.664 "aliases": [ 00:14:14.664 "5553b3e7-9c30-441a-8c45-2b69113f54df" 00:14:14.664 ], 00:14:14.664 "product_name": "Malloc disk", 00:14:14.664 "block_size": 512, 00:14:14.664 "num_blocks": 65536, 00:14:14.664 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:14.664 "assigned_rate_limits": { 00:14:14.664 "rw_ios_per_sec": 0, 00:14:14.664 "rw_mbytes_per_sec": 0, 00:14:14.664 "r_mbytes_per_sec": 0, 00:14:14.664 "w_mbytes_per_sec": 0 00:14:14.664 }, 00:14:14.664 "claimed": true, 00:14:14.664 "claim_type": "exclusive_write", 00:14:14.664 "zoned": false, 00:14:14.664 "supported_io_types": { 00:14:14.664 "read": true, 00:14:14.664 "write": true, 00:14:14.664 "unmap": true, 00:14:14.664 "flush": true, 00:14:14.664 "reset": true, 00:14:14.664 "nvme_admin": false, 00:14:14.664 "nvme_io": false, 00:14:14.664 "nvme_io_md": false, 00:14:14.664 "write_zeroes": true, 00:14:14.664 "zcopy": true, 00:14:14.664 "get_zone_info": false, 00:14:14.664 "zone_management": false, 00:14:14.664 "zone_append": false, 00:14:14.664 "compare": false, 00:14:14.664 "compare_and_write": false, 00:14:14.664 "abort": true, 00:14:14.664 "seek_hole": false, 00:14:14.664 "seek_data": false, 00:14:14.664 "copy": true, 00:14:14.664 "nvme_iov_md": false 00:14:14.664 }, 00:14:14.664 "memory_domains": [ 00:14:14.664 { 00:14:14.664 "dma_device_id": "system", 00:14:14.664 "dma_device_type": 1 00:14:14.664 }, 00:14:14.664 { 00:14:14.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.664 "dma_device_type": 2 00:14:14.664 } 00:14:14.664 ], 00:14:14.664 "driver_specific": {} 00:14:14.664 } 00:14:14.664 ] 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:14.664 19:34:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.664 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.665 "name": "Existed_Raid", 00:14:14.665 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:14.665 "strip_size_kb": 64, 00:14:14.665 
"state": "online", 00:14:14.665 "raid_level": "raid0", 00:14:14.665 "superblock": true, 00:14:14.665 "num_base_bdevs": 4, 00:14:14.665 "num_base_bdevs_discovered": 4, 00:14:14.665 "num_base_bdevs_operational": 4, 00:14:14.665 "base_bdevs_list": [ 00:14:14.665 { 00:14:14.665 "name": "NewBaseBdev", 00:14:14.665 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:14.665 "is_configured": true, 00:14:14.665 "data_offset": 2048, 00:14:14.665 "data_size": 63488 00:14:14.665 }, 00:14:14.665 { 00:14:14.665 "name": "BaseBdev2", 00:14:14.665 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:14.665 "is_configured": true, 00:14:14.665 "data_offset": 2048, 00:14:14.665 "data_size": 63488 00:14:14.665 }, 00:14:14.665 { 00:14:14.665 "name": "BaseBdev3", 00:14:14.665 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:14.665 "is_configured": true, 00:14:14.665 "data_offset": 2048, 00:14:14.665 "data_size": 63488 00:14:14.665 }, 00:14:14.665 { 00:14:14.665 "name": "BaseBdev4", 00:14:14.665 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:14.665 "is_configured": true, 00:14:14.665 "data_offset": 2048, 00:14:14.665 "data_size": 63488 00:14:14.665 } 00:14:14.665 ] 00:14:14.665 }' 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.665 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:15.232 
19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:15.232 [2024-12-05 19:34:08.585300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.232 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:15.232 "name": "Existed_Raid", 00:14:15.232 "aliases": [ 00:14:15.232 "cda78298-5b27-482d-b09f-dc129435d7fe" 00:14:15.232 ], 00:14:15.232 "product_name": "Raid Volume", 00:14:15.232 "block_size": 512, 00:14:15.232 "num_blocks": 253952, 00:14:15.232 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:15.232 "assigned_rate_limits": { 00:14:15.233 "rw_ios_per_sec": 0, 00:14:15.233 "rw_mbytes_per_sec": 0, 00:14:15.233 "r_mbytes_per_sec": 0, 00:14:15.233 "w_mbytes_per_sec": 0 00:14:15.233 }, 00:14:15.233 "claimed": false, 00:14:15.233 "zoned": false, 00:14:15.233 "supported_io_types": { 00:14:15.233 "read": true, 00:14:15.233 "write": true, 00:14:15.233 "unmap": true, 00:14:15.233 "flush": true, 00:14:15.233 "reset": true, 00:14:15.233 "nvme_admin": false, 00:14:15.233 "nvme_io": false, 00:14:15.233 "nvme_io_md": false, 00:14:15.233 "write_zeroes": true, 00:14:15.233 "zcopy": false, 00:14:15.233 "get_zone_info": false, 00:14:15.233 "zone_management": false, 00:14:15.233 "zone_append": false, 00:14:15.233 "compare": false, 00:14:15.233 "compare_and_write": false, 00:14:15.233 "abort": 
false, 00:14:15.233 "seek_hole": false, 00:14:15.233 "seek_data": false, 00:14:15.233 "copy": false, 00:14:15.233 "nvme_iov_md": false 00:14:15.233 }, 00:14:15.233 "memory_domains": [ 00:14:15.233 { 00:14:15.233 "dma_device_id": "system", 00:14:15.233 "dma_device_type": 1 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.233 "dma_device_type": 2 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "system", 00:14:15.233 "dma_device_type": 1 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.233 "dma_device_type": 2 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "system", 00:14:15.233 "dma_device_type": 1 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.233 "dma_device_type": 2 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "system", 00:14:15.233 "dma_device_type": 1 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.233 "dma_device_type": 2 00:14:15.233 } 00:14:15.233 ], 00:14:15.233 "driver_specific": { 00:14:15.233 "raid": { 00:14:15.233 "uuid": "cda78298-5b27-482d-b09f-dc129435d7fe", 00:14:15.233 "strip_size_kb": 64, 00:14:15.233 "state": "online", 00:14:15.233 "raid_level": "raid0", 00:14:15.233 "superblock": true, 00:14:15.233 "num_base_bdevs": 4, 00:14:15.233 "num_base_bdevs_discovered": 4, 00:14:15.233 "num_base_bdevs_operational": 4, 00:14:15.233 "base_bdevs_list": [ 00:14:15.233 { 00:14:15.233 "name": "NewBaseBdev", 00:14:15.233 "uuid": "5553b3e7-9c30-441a-8c45-2b69113f54df", 00:14:15.233 "is_configured": true, 00:14:15.233 "data_offset": 2048, 00:14:15.233 "data_size": 63488 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "name": "BaseBdev2", 00:14:15.233 "uuid": "7277c453-1b26-4f05-91a8-d911e873c748", 00:14:15.233 "is_configured": true, 00:14:15.233 "data_offset": 2048, 00:14:15.233 "data_size": 63488 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 
"name": "BaseBdev3", 00:14:15.233 "uuid": "a5e33d5d-cb08-4445-bdd2-4befb869cf67", 00:14:15.233 "is_configured": true, 00:14:15.233 "data_offset": 2048, 00:14:15.233 "data_size": 63488 00:14:15.233 }, 00:14:15.233 { 00:14:15.233 "name": "BaseBdev4", 00:14:15.233 "uuid": "7bff9ba1-1d7c-4c1f-b199-e6091a57d89d", 00:14:15.233 "is_configured": true, 00:14:15.233 "data_offset": 2048, 00:14:15.233 "data_size": 63488 00:14:15.233 } 00:14:15.233 ] 00:14:15.233 } 00:14:15.233 } 00:14:15.233 }' 00:14:15.233 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.491 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:15.492 BaseBdev2 00:14:15.492 BaseBdev3 00:14:15.492 BaseBdev4' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.492 19:34:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:15.492 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.750 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.750 [2024-12-05 19:34:08.988909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.750 [2024-12-05 19:34:08.988945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.751 [2024-12-05 19:34:08.989036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.751 [2024-12-05 19:34:08.989156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.751 [2024-12-05 19:34:08.989172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70129 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70129 ']' 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70129 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.751 19:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70129 00:14:15.751 19:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.751 killing process with pid 70129 00:14:15.751 19:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.751 19:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70129' 00:14:15.751 19:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70129 00:14:15.751 [2024-12-05 19:34:09.023367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.751 19:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70129 00:14:16.010 [2024-12-05 19:34:09.364667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.384 19:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:17.384 00:14:17.384 real 0m13.129s 00:14:17.384 user 0m21.855s 00:14:17.384 sys 0m1.795s 00:14:17.384 19:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.384 
************************************ 00:14:17.384 END TEST raid_state_function_test_sb 00:14:17.384 ************************************ 00:14:17.384 19:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.384 19:34:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:17.384 19:34:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:17.384 19:34:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.384 19:34:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.384 ************************************ 00:14:17.384 START TEST raid_superblock_test 00:14:17.384 ************************************ 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:17.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70818 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70818 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70818 ']' 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.384 19:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.384 [2024-12-05 19:34:10.581292] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:14:17.384 [2024-12-05 19:34:10.581790] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70818 ] 00:14:17.384 [2024-12-05 19:34:10.754598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.652 [2024-12-05 19:34:10.885655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.931 [2024-12-05 19:34:11.088661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.931 [2024-12-05 19:34:11.088723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:18.189 
19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 malloc1 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 [2024-12-05 19:34:11.532103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:18.189 [2024-12-05 19:34:11.532313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.189 [2024-12-05 19:34:11.532393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:18.189 [2024-12-05 19:34:11.532660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.189 [2024-12-05 19:34:11.535548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.189 [2024-12-05 19:34:11.535756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:18.189 pt1 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:18.189 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.190 malloc2 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.190 [2024-12-05 19:34:11.589756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:18.190 [2024-12-05 19:34:11.589832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.190 [2024-12-05 19:34:11.589869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:18.190 [2024-12-05 19:34:11.589885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.190 [2024-12-05 19:34:11.592740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.190 [2024-12-05 19:34:11.592792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:18.190 
pt2 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.190 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.447 malloc3 00:14:18.447 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.447 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.448 [2024-12-05 19:34:11.654992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:18.448 [2024-12-05 19:34:11.655233] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.448 [2024-12-05 19:34:11.655313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:18.448 [2024-12-05 19:34:11.655421] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.448 [2024-12-05 19:34:11.658447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.448 [2024-12-05 19:34:11.658612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:18.448 pt3 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.448 malloc4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.448 [2024-12-05 19:34:11.711491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:18.448 [2024-12-05 19:34:11.711749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.448 [2024-12-05 19:34:11.711825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:18.448 [2024-12-05 19:34:11.711941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.448 [2024-12-05 19:34:11.714749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.448 [2024-12-05 19:34:11.714918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:18.448 pt4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.448 [2024-12-05 19:34:11.723654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:18.448 [2024-12-05 
19:34:11.726180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:18.448 [2024-12-05 19:34:11.726419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:18.448 [2024-12-05 19:34:11.726537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:18.448 [2024-12-05 19:34:11.726982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:18.448 [2024-12-05 19:34:11.727023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:18.448 [2024-12-05 19:34:11.727353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:18.448 [2024-12-05 19:34:11.727577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:18.448 [2024-12-05 19:34:11.727612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:18.448 [2024-12-05 19:34:11.727909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.448 "name": "raid_bdev1", 00:14:18.448 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:18.448 "strip_size_kb": 64, 00:14:18.448 "state": "online", 00:14:18.448 "raid_level": "raid0", 00:14:18.448 "superblock": true, 00:14:18.448 "num_base_bdevs": 4, 00:14:18.448 "num_base_bdevs_discovered": 4, 00:14:18.448 "num_base_bdevs_operational": 4, 00:14:18.448 "base_bdevs_list": [ 00:14:18.448 { 00:14:18.448 "name": "pt1", 00:14:18.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.448 "is_configured": true, 00:14:18.448 "data_offset": 2048, 00:14:18.448 "data_size": 63488 00:14:18.448 }, 00:14:18.448 { 00:14:18.448 "name": "pt2", 00:14:18.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.448 "is_configured": true, 00:14:18.448 "data_offset": 2048, 00:14:18.448 "data_size": 63488 00:14:18.448 }, 00:14:18.448 { 00:14:18.448 "name": "pt3", 00:14:18.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.448 "is_configured": true, 00:14:18.448 "data_offset": 2048, 00:14:18.448 
"data_size": 63488 00:14:18.448 }, 00:14:18.448 { 00:14:18.448 "name": "pt4", 00:14:18.448 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.448 "is_configured": true, 00:14:18.448 "data_offset": 2048, 00:14:18.448 "data_size": 63488 00:14:18.448 } 00:14:18.448 ] 00:14:18.448 }' 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.448 19:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.015 [2024-12-05 19:34:12.272437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.015 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.015 "name": "raid_bdev1", 00:14:19.015 "aliases": [ 00:14:19.015 "1cffca14-fe44-4926-a732-c55d491a9c75" 
00:14:19.015 ], 00:14:19.015 "product_name": "Raid Volume", 00:14:19.015 "block_size": 512, 00:14:19.015 "num_blocks": 253952, 00:14:19.015 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:19.015 "assigned_rate_limits": { 00:14:19.015 "rw_ios_per_sec": 0, 00:14:19.015 "rw_mbytes_per_sec": 0, 00:14:19.015 "r_mbytes_per_sec": 0, 00:14:19.015 "w_mbytes_per_sec": 0 00:14:19.015 }, 00:14:19.015 "claimed": false, 00:14:19.015 "zoned": false, 00:14:19.015 "supported_io_types": { 00:14:19.015 "read": true, 00:14:19.015 "write": true, 00:14:19.015 "unmap": true, 00:14:19.015 "flush": true, 00:14:19.015 "reset": true, 00:14:19.015 "nvme_admin": false, 00:14:19.015 "nvme_io": false, 00:14:19.015 "nvme_io_md": false, 00:14:19.015 "write_zeroes": true, 00:14:19.015 "zcopy": false, 00:14:19.015 "get_zone_info": false, 00:14:19.015 "zone_management": false, 00:14:19.015 "zone_append": false, 00:14:19.015 "compare": false, 00:14:19.015 "compare_and_write": false, 00:14:19.015 "abort": false, 00:14:19.015 "seek_hole": false, 00:14:19.015 "seek_data": false, 00:14:19.015 "copy": false, 00:14:19.015 "nvme_iov_md": false 00:14:19.015 }, 00:14:19.015 "memory_domains": [ 00:14:19.015 { 00:14:19.015 "dma_device_id": "system", 00:14:19.015 "dma_device_type": 1 00:14:19.015 }, 00:14:19.015 { 00:14:19.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.015 "dma_device_type": 2 00:14:19.015 }, 00:14:19.015 { 00:14:19.015 "dma_device_id": "system", 00:14:19.015 "dma_device_type": 1 00:14:19.015 }, 00:14:19.015 { 00:14:19.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.015 "dma_device_type": 2 00:14:19.015 }, 00:14:19.015 { 00:14:19.015 "dma_device_id": "system", 00:14:19.015 "dma_device_type": 1 00:14:19.016 }, 00:14:19.016 { 00:14:19.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.016 "dma_device_type": 2 00:14:19.016 }, 00:14:19.016 { 00:14:19.016 "dma_device_id": "system", 00:14:19.016 "dma_device_type": 1 00:14:19.016 }, 00:14:19.016 { 00:14:19.016 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:19.016 "dma_device_type": 2 00:14:19.016 } 00:14:19.016 ], 00:14:19.016 "driver_specific": { 00:14:19.016 "raid": { 00:14:19.016 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:19.016 "strip_size_kb": 64, 00:14:19.016 "state": "online", 00:14:19.016 "raid_level": "raid0", 00:14:19.016 "superblock": true, 00:14:19.016 "num_base_bdevs": 4, 00:14:19.016 "num_base_bdevs_discovered": 4, 00:14:19.016 "num_base_bdevs_operational": 4, 00:14:19.016 "base_bdevs_list": [ 00:14:19.016 { 00:14:19.016 "name": "pt1", 00:14:19.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.016 "is_configured": true, 00:14:19.016 "data_offset": 2048, 00:14:19.016 "data_size": 63488 00:14:19.016 }, 00:14:19.016 { 00:14:19.016 "name": "pt2", 00:14:19.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.016 "is_configured": true, 00:14:19.016 "data_offset": 2048, 00:14:19.016 "data_size": 63488 00:14:19.016 }, 00:14:19.016 { 00:14:19.016 "name": "pt3", 00:14:19.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.016 "is_configured": true, 00:14:19.016 "data_offset": 2048, 00:14:19.016 "data_size": 63488 00:14:19.016 }, 00:14:19.016 { 00:14:19.016 "name": "pt4", 00:14:19.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.016 "is_configured": true, 00:14:19.016 "data_offset": 2048, 00:14:19.016 "data_size": 63488 00:14:19.016 } 00:14:19.016 ] 00:14:19.016 } 00:14:19.016 } 00:14:19.016 }' 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:19.016 pt2 00:14:19.016 pt3 00:14:19.016 pt4' 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.016 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.274 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.275 19:34:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.275 [2024-12-05 19:34:12.644457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1cffca14-fe44-4926-a732-c55d491a9c75 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1cffca14-fe44-4926-a732-c55d491a9c75 ']' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.275 [2024-12-05 19:34:12.696113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.275 [2024-12-05 19:34:12.696281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.275 [2024-12-05 19:34:12.696498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.275 [2024-12-05 19:34:12.696684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.275 [2024-12-05 19:34:12.696885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.275 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.534 19:34:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.534 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.534 [2024-12-05 19:34:12.852184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:19.534 [2024-12-05 19:34:12.854897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:19.534 [2024-12-05 19:34:12.854963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:19.534 [2024-12-05 19:34:12.855019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:19.534 [2024-12-05 19:34:12.855095] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:19.534 [2024-12-05 19:34:12.855167] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:19.534 [2024-12-05 19:34:12.855202] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:19.534 [2024-12-05 19:34:12.855235] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:19.534 [2024-12-05 19:34:12.855258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.534 [2024-12-05 19:34:12.855279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:14:19.534 request: 00:14:19.534 { 00:14:19.534 "name": "raid_bdev1", 00:14:19.534 "raid_level": "raid0", 00:14:19.534 "base_bdevs": [ 00:14:19.535 "malloc1", 00:14:19.535 "malloc2", 00:14:19.535 "malloc3", 00:14:19.535 "malloc4" 00:14:19.535 ], 00:14:19.535 "strip_size_kb": 64, 00:14:19.535 "superblock": false, 00:14:19.535 "method": "bdev_raid_create", 00:14:19.535 "req_id": 1 00:14:19.535 } 00:14:19.535 Got JSON-RPC error response 00:14:19.535 response: 00:14:19.535 { 00:14:19.535 "code": -17, 00:14:19.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:19.535 } 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.535 [2024-12-05 19:34:12.920171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:19.535 [2024-12-05 19:34:12.920359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.535 [2024-12-05 19:34:12.920433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:19.535 [2024-12-05 19:34:12.920597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.535 [2024-12-05 19:34:12.923473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.535 [2024-12-05 19:34:12.923630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:19.535 [2024-12-05 19:34:12.923893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:19.535 [2024-12-05 19:34:12.924087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:19.535 pt1 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.535 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.794 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.794 "name": "raid_bdev1", 00:14:19.794 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:19.794 "strip_size_kb": 64, 00:14:19.794 "state": "configuring", 00:14:19.794 "raid_level": "raid0", 00:14:19.794 "superblock": true, 00:14:19.794 "num_base_bdevs": 4, 00:14:19.794 "num_base_bdevs_discovered": 1, 00:14:19.794 "num_base_bdevs_operational": 4, 00:14:19.794 "base_bdevs_list": [ 00:14:19.794 { 00:14:19.794 "name": "pt1", 00:14:19.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.794 "is_configured": true, 00:14:19.794 "data_offset": 2048, 00:14:19.794 "data_size": 63488 00:14:19.794 }, 00:14:19.794 { 00:14:19.794 "name": null, 00:14:19.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.794 "is_configured": false, 00:14:19.794 "data_offset": 2048, 00:14:19.794 "data_size": 63488 00:14:19.794 }, 00:14:19.794 { 00:14:19.794 "name": null, 00:14:19.794 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.794 "is_configured": false, 00:14:19.794 "data_offset": 2048, 00:14:19.794 "data_size": 63488 00:14:19.794 }, 00:14:19.794 { 00:14:19.794 "name": null, 00:14:19.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.794 "is_configured": false, 00:14:19.794 "data_offset": 2048, 00:14:19.794 "data_size": 63488 00:14:19.794 } 00:14:19.794 ] 00:14:19.794 }' 00:14:19.794 19:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.794 19:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.053 [2024-12-05 19:34:13.444604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.053 [2024-12-05 19:34:13.444706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.053 [2024-12-05 19:34:13.444767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:20.053 [2024-12-05 19:34:13.444786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.053 [2024-12-05 19:34:13.445423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.053 [2024-12-05 19:34:13.445461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.053 [2024-12-05 19:34:13.445566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:20.053 [2024-12-05 19:34:13.445612] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:20.053 pt2 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.053 [2024-12-05 19:34:13.452577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.053 19:34:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.053 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.311 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.311 "name": "raid_bdev1", 00:14:20.311 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:20.311 "strip_size_kb": 64, 00:14:20.311 "state": "configuring", 00:14:20.311 "raid_level": "raid0", 00:14:20.311 "superblock": true, 00:14:20.311 "num_base_bdevs": 4, 00:14:20.311 "num_base_bdevs_discovered": 1, 00:14:20.311 "num_base_bdevs_operational": 4, 00:14:20.311 "base_bdevs_list": [ 00:14:20.311 { 00:14:20.311 "name": "pt1", 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.311 "is_configured": true, 00:14:20.311 "data_offset": 2048, 00:14:20.311 "data_size": 63488 00:14:20.311 }, 00:14:20.311 { 00:14:20.311 "name": null, 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.311 "is_configured": false, 00:14:20.311 "data_offset": 0, 00:14:20.311 "data_size": 63488 00:14:20.311 }, 00:14:20.311 { 00:14:20.311 "name": null, 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.311 "is_configured": false, 00:14:20.311 "data_offset": 2048, 00:14:20.311 "data_size": 63488 00:14:20.311 }, 00:14:20.311 { 00:14:20.311 "name": null, 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.311 "is_configured": false, 00:14:20.311 "data_offset": 2048, 00:14:20.311 "data_size": 63488 00:14:20.311 } 00:14:20.311 ] 00:14:20.311 }' 00:14:20.311 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.311 19:34:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.570 [2024-12-05 19:34:13.988854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.570 [2024-12-05 19:34:13.989071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.570 [2024-12-05 19:34:13.989146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:20.570 [2024-12-05 19:34:13.989260] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.570 [2024-12-05 19:34:13.989873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.570 [2024-12-05 19:34:13.989899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.570 [2024-12-05 19:34:13.990007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:20.570 [2024-12-05 19:34:13.990041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:20.570 pt2 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.570 19:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.570 [2024-12-05 19:34:13.996768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:20.570 [2024-12-05 19:34:13.996944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.570 [2024-12-05 19:34:13.997020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:20.570 [2024-12-05 19:34:13.997211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.570 [2024-12-05 19:34:13.997761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.570 [2024-12-05 19:34:13.997912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:20.570 [2024-12-05 19:34:13.998124] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:20.570 [2024-12-05 19:34:13.998276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:20.570 pt3 00:14:20.570 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.570 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:20.570 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:20.570 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:20.570 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.570 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.570 [2024-12-05 19:34:14.004720] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:20.570 [2024-12-05 19:34:14.004766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.570 [2024-12-05 19:34:14.004792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:20.570 [2024-12-05 19:34:14.004806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.570 [2024-12-05 19:34:14.005259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.570 [2024-12-05 19:34:14.005300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:20.570 [2024-12-05 19:34:14.005382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:20.570 [2024-12-05 19:34:14.005415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:20.570 [2024-12-05 19:34:14.005586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:20.570 [2024-12-05 19:34:14.005608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:20.571 [2024-12-05 19:34:14.005934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:20.571 [2024-12-05 19:34:14.006126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:20.571 [2024-12-05 19:34:14.006166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:20.571 [2024-12-05 19:34:14.006331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.830 pt4 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.830 "name": "raid_bdev1", 00:14:20.830 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:20.830 "strip_size_kb": 64, 00:14:20.830 "state": "online", 00:14:20.830 "raid_level": "raid0", 00:14:20.830 
"superblock": true, 00:14:20.830 "num_base_bdevs": 4, 00:14:20.830 "num_base_bdevs_discovered": 4, 00:14:20.830 "num_base_bdevs_operational": 4, 00:14:20.830 "base_bdevs_list": [ 00:14:20.830 { 00:14:20.830 "name": "pt1", 00:14:20.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.830 "is_configured": true, 00:14:20.830 "data_offset": 2048, 00:14:20.830 "data_size": 63488 00:14:20.830 }, 00:14:20.830 { 00:14:20.830 "name": "pt2", 00:14:20.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.830 "is_configured": true, 00:14:20.830 "data_offset": 2048, 00:14:20.830 "data_size": 63488 00:14:20.830 }, 00:14:20.830 { 00:14:20.830 "name": "pt3", 00:14:20.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.830 "is_configured": true, 00:14:20.830 "data_offset": 2048, 00:14:20.830 "data_size": 63488 00:14:20.830 }, 00:14:20.830 { 00:14:20.830 "name": "pt4", 00:14:20.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.830 "is_configured": true, 00:14:20.830 "data_offset": 2048, 00:14:20.830 "data_size": 63488 00:14:20.830 } 00:14:20.830 ] 00:14:20.830 }' 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.830 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.089 19:34:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.089 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.349 [2024-12-05 19:34:14.533422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.349 "name": "raid_bdev1", 00:14:21.349 "aliases": [ 00:14:21.349 "1cffca14-fe44-4926-a732-c55d491a9c75" 00:14:21.349 ], 00:14:21.349 "product_name": "Raid Volume", 00:14:21.349 "block_size": 512, 00:14:21.349 "num_blocks": 253952, 00:14:21.349 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:21.349 "assigned_rate_limits": { 00:14:21.349 "rw_ios_per_sec": 0, 00:14:21.349 "rw_mbytes_per_sec": 0, 00:14:21.349 "r_mbytes_per_sec": 0, 00:14:21.349 "w_mbytes_per_sec": 0 00:14:21.349 }, 00:14:21.349 "claimed": false, 00:14:21.349 "zoned": false, 00:14:21.349 "supported_io_types": { 00:14:21.349 "read": true, 00:14:21.349 "write": true, 00:14:21.349 "unmap": true, 00:14:21.349 "flush": true, 00:14:21.349 "reset": true, 00:14:21.349 "nvme_admin": false, 00:14:21.349 "nvme_io": false, 00:14:21.349 "nvme_io_md": false, 00:14:21.349 "write_zeroes": true, 00:14:21.349 "zcopy": false, 00:14:21.349 "get_zone_info": false, 00:14:21.349 "zone_management": false, 00:14:21.349 "zone_append": false, 00:14:21.349 "compare": false, 00:14:21.349 "compare_and_write": false, 00:14:21.349 "abort": false, 00:14:21.349 "seek_hole": false, 00:14:21.349 "seek_data": false, 00:14:21.349 "copy": false, 00:14:21.349 "nvme_iov_md": false 00:14:21.349 }, 00:14:21.349 
"memory_domains": [ 00:14:21.349 { 00:14:21.349 "dma_device_id": "system", 00:14:21.349 "dma_device_type": 1 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.349 "dma_device_type": 2 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "system", 00:14:21.349 "dma_device_type": 1 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.349 "dma_device_type": 2 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "system", 00:14:21.349 "dma_device_type": 1 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.349 "dma_device_type": 2 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "system", 00:14:21.349 "dma_device_type": 1 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.349 "dma_device_type": 2 00:14:21.349 } 00:14:21.349 ], 00:14:21.349 "driver_specific": { 00:14:21.349 "raid": { 00:14:21.349 "uuid": "1cffca14-fe44-4926-a732-c55d491a9c75", 00:14:21.349 "strip_size_kb": 64, 00:14:21.349 "state": "online", 00:14:21.349 "raid_level": "raid0", 00:14:21.349 "superblock": true, 00:14:21.349 "num_base_bdevs": 4, 00:14:21.349 "num_base_bdevs_discovered": 4, 00:14:21.349 "num_base_bdevs_operational": 4, 00:14:21.349 "base_bdevs_list": [ 00:14:21.349 { 00:14:21.349 "name": "pt1", 00:14:21.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.349 "is_configured": true, 00:14:21.349 "data_offset": 2048, 00:14:21.349 "data_size": 63488 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "name": "pt2", 00:14:21.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.349 "is_configured": true, 00:14:21.349 "data_offset": 2048, 00:14:21.349 "data_size": 63488 00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "name": "pt3", 00:14:21.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.349 "is_configured": true, 00:14:21.349 "data_offset": 2048, 00:14:21.349 "data_size": 63488 
00:14:21.349 }, 00:14:21.349 { 00:14:21.349 "name": "pt4", 00:14:21.349 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.349 "is_configured": true, 00:14:21.349 "data_offset": 2048, 00:14:21.349 "data_size": 63488 00:14:21.349 } 00:14:21.349 ] 00:14:21.349 } 00:14:21.349 } 00:14:21.349 }' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:21.349 pt2 00:14:21.349 pt3 00:14:21.349 pt4' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.349 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.608 [2024-12-05 19:34:14.925428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1cffca14-fe44-4926-a732-c55d491a9c75 '!=' 1cffca14-fe44-4926-a732-c55d491a9c75 ']' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70818 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70818 ']' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70818 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.608 19:34:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70818 00:14:21.608 killing process with pid 70818 00:14:21.608 19:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.608 19:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.608 19:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70818' 00:14:21.608 19:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70818 00:14:21.608 [2024-12-05 19:34:15.005169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.608 19:34:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70818 00:14:21.608 [2024-12-05 19:34:15.005287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.608 [2024-12-05 19:34:15.005414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.608 [2024-12-05 19:34:15.005430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:22.193 [2024-12-05 19:34:15.361070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.128 ************************************ 00:14:23.128 END TEST raid_superblock_test 00:14:23.128 ************************************ 00:14:23.128 19:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:23.128 00:14:23.128 real 0m5.929s 00:14:23.128 user 0m8.880s 00:14:23.128 sys 0m0.898s 00:14:23.128 19:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.128 19:34:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.128 19:34:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:23.128 19:34:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:23.128 19:34:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.128 19:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.128 ************************************ 00:14:23.128 START TEST raid_read_error_test 00:14:23.128 ************************************ 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mExgc5S56h 00:14:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71090 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71090 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71090 ']' 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.128 19:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.128 [2024-12-05 19:34:16.561529] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:14:23.128 [2024-12-05 19:34:16.561668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:14:23.386 [2024-12-05 19:34:16.738544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.647 [2024-12-05 19:34:16.866038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.647 [2024-12-05 19:34:17.067080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.647 [2024-12-05 19:34:17.067152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.215 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.216 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:24.216 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.216 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:24.216 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.216 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.216 BaseBdev1_malloc 00:14:24.216 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 true 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 [2024-12-05 19:34:17.668845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:24.476 [2024-12-05 19:34:17.669135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.476 [2024-12-05 19:34:17.669176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:24.476 [2024-12-05 19:34:17.669197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.476 [2024-12-05 19:34:17.672074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.476 BaseBdev1 00:14:24.476 [2024-12-05 19:34:17.672266] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 BaseBdev2_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 true 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 [2024-12-05 19:34:17.722599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:24.476 [2024-12-05 19:34:17.722689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.476 [2024-12-05 19:34:17.722729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:24.476 [2024-12-05 19:34:17.722776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.476 [2024-12-05 19:34:17.725506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.476 [2024-12-05 19:34:17.725564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:24.476 BaseBdev2 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 BaseBdev3_malloc 00:14:24.476 19:34:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 true 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 [2024-12-05 19:34:17.796783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:24.476 [2024-12-05 19:34:17.796846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.476 [2024-12-05 19:34:17.796872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:24.476 [2024-12-05 19:34:17.796889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.476 [2024-12-05 19:34:17.799753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.476 [2024-12-05 19:34:17.799800] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:24.476 BaseBdev3 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 BaseBdev4_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 true 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.476 [2024-12-05 19:34:17.858191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:24.476 [2024-12-05 19:34:17.858266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.476 [2024-12-05 19:34:17.858295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:24.476 [2024-12-05 19:34:17.858314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.476 [2024-12-05 19:34:17.861217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.476 [2024-12-05 19:34:17.861293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:24.476 BaseBdev4 00:14:24.476 19:34:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.477 [2024-12-05 19:34:17.870293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.477 [2024-12-05 19:34:17.872837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.477 [2024-12-05 19:34:17.872938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.477 [2024-12-05 19:34:17.873081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:24.477 [2024-12-05 19:34:17.873377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:24.477 [2024-12-05 19:34:17.873414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:24.477 [2024-12-05 19:34:17.873744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:24.477 [2024-12-05 19:34:17.873967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:24.477 [2024-12-05 19:34:17.873994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:24.477 [2024-12-05 19:34:17.874226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:24.477 19:34:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.477 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.736 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.736 "name": "raid_bdev1", 00:14:24.736 "uuid": "466ec209-ebb3-4f13-ad94-b99440e1059b", 00:14:24.736 "strip_size_kb": 64, 00:14:24.736 "state": "online", 00:14:24.736 "raid_level": "raid0", 00:14:24.736 "superblock": true, 00:14:24.736 "num_base_bdevs": 4, 00:14:24.736 "num_base_bdevs_discovered": 4, 00:14:24.736 "num_base_bdevs_operational": 4, 00:14:24.736 "base_bdevs_list": [ 00:14:24.736 
{ 00:14:24.736 "name": "BaseBdev1", 00:14:24.736 "uuid": "4f238070-cae0-5d57-87d7-28b3b2e6fa9f", 00:14:24.736 "is_configured": true, 00:14:24.736 "data_offset": 2048, 00:14:24.736 "data_size": 63488 00:14:24.736 }, 00:14:24.736 { 00:14:24.736 "name": "BaseBdev2", 00:14:24.736 "uuid": "7b92b66c-b650-57cc-a6f9-9683668ea4b5", 00:14:24.736 "is_configured": true, 00:14:24.736 "data_offset": 2048, 00:14:24.736 "data_size": 63488 00:14:24.736 }, 00:14:24.736 { 00:14:24.736 "name": "BaseBdev3", 00:14:24.736 "uuid": "daf52d01-0bb7-5932-bfb6-51ee2640bece", 00:14:24.736 "is_configured": true, 00:14:24.736 "data_offset": 2048, 00:14:24.736 "data_size": 63488 00:14:24.736 }, 00:14:24.736 { 00:14:24.736 "name": "BaseBdev4", 00:14:24.736 "uuid": "6f91d9cc-69fd-59c2-b028-fe11d4fbc01a", 00:14:24.736 "is_configured": true, 00:14:24.736 "data_offset": 2048, 00:14:24.736 "data_size": 63488 00:14:24.736 } 00:14:24.736 ] 00:14:24.736 }' 00:14:24.736 19:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.736 19:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.995 19:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:24.995 19:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:25.254 [2024-12-05 19:34:18.531950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.192 19:34:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.192 19:34:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.192 "name": "raid_bdev1", 00:14:26.192 "uuid": "466ec209-ebb3-4f13-ad94-b99440e1059b", 00:14:26.192 "strip_size_kb": 64, 00:14:26.192 "state": "online", 00:14:26.192 "raid_level": "raid0", 00:14:26.192 "superblock": true, 00:14:26.192 "num_base_bdevs": 4, 00:14:26.192 "num_base_bdevs_discovered": 4, 00:14:26.192 "num_base_bdevs_operational": 4, 00:14:26.192 "base_bdevs_list": [ 00:14:26.192 { 00:14:26.192 "name": "BaseBdev1", 00:14:26.192 "uuid": "4f238070-cae0-5d57-87d7-28b3b2e6fa9f", 00:14:26.192 "is_configured": true, 00:14:26.192 "data_offset": 2048, 00:14:26.192 "data_size": 63488 00:14:26.192 }, 00:14:26.192 { 00:14:26.192 "name": "BaseBdev2", 00:14:26.192 "uuid": "7b92b66c-b650-57cc-a6f9-9683668ea4b5", 00:14:26.192 "is_configured": true, 00:14:26.192 "data_offset": 2048, 00:14:26.192 "data_size": 63488 00:14:26.192 }, 00:14:26.192 { 00:14:26.192 "name": "BaseBdev3", 00:14:26.192 "uuid": "daf52d01-0bb7-5932-bfb6-51ee2640bece", 00:14:26.192 "is_configured": true, 00:14:26.192 "data_offset": 2048, 00:14:26.192 "data_size": 63488 00:14:26.192 }, 00:14:26.192 { 00:14:26.192 "name": "BaseBdev4", 00:14:26.192 "uuid": "6f91d9cc-69fd-59c2-b028-fe11d4fbc01a", 00:14:26.192 "is_configured": true, 00:14:26.192 "data_offset": 2048, 00:14:26.192 "data_size": 63488 00:14:26.192 } 00:14:26.192 ] 00:14:26.192 }' 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.192 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.761 [2024-12-05 19:34:19.942900] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.761 [2024-12-05 19:34:19.942943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.761 [2024-12-05 19:34:19.946538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.761 [2024-12-05 19:34:19.946630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.761 [2024-12-05 19:34:19.946689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.761 [2024-12-05 19:34:19.946709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:26.761 { 00:14:26.761 "results": [ 00:14:26.761 { 00:14:26.761 "job": "raid_bdev1", 00:14:26.761 "core_mask": "0x1", 00:14:26.761 "workload": "randrw", 00:14:26.761 "percentage": 50, 00:14:26.761 "status": "finished", 00:14:26.761 "queue_depth": 1, 00:14:26.761 "io_size": 131072, 00:14:26.761 "runtime": 1.408462, 00:14:26.761 "iops": 10228.17797001268, 00:14:26.761 "mibps": 1278.522246251585, 00:14:26.761 "io_failed": 1, 00:14:26.761 "io_timeout": 0, 00:14:26.761 "avg_latency_us": 135.8532006537226, 00:14:26.761 "min_latency_us": 38.4, 00:14:26.761 "max_latency_us": 1884.16 00:14:26.761 } 00:14:26.761 ], 00:14:26.761 "core_count": 1 00:14:26.761 } 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71090 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71090 ']' 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71090 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.761 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71090 00:14:26.761 killing process with pid 71090 00:14:26.762 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.762 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.762 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71090' 00:14:26.762 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71090 00:14:26.762 [2024-12-05 19:34:19.980160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.762 19:34:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71090 00:14:27.022 [2024-12-05 19:34:20.273849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mExgc5S56h 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:27.961 ************************************ 00:14:27.961 END TEST raid_read_error_test 00:14:27.961 ************************************ 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:27.961 00:14:27.961 real 0m4.940s 
00:14:27.961 user 0m6.145s 00:14:27.961 sys 0m0.596s 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.961 19:34:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.221 19:34:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:28.221 19:34:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:28.221 19:34:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.221 19:34:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.221 ************************************ 00:14:28.221 START TEST raid_write_error_test 00:14:28.221 ************************************ 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:28.221 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i6kTzMF3iP 00:14:28.222 19:34:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71237 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71237 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71237 ']' 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.222 19:34:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.222 [2024-12-05 19:34:21.576153] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:14:28.222 [2024-12-05 19:34:21.576347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71237 ] 00:14:28.481 [2024-12-05 19:34:21.758441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.481 [2024-12-05 19:34:21.892586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.740 [2024-12-05 19:34:22.107413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.740 [2024-12-05 19:34:22.107499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 BaseBdev1_malloc 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 true 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 [2024-12-05 19:34:22.644241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:29.309 [2024-12-05 19:34:22.644320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.309 [2024-12-05 19:34:22.644350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:29.309 [2024-12-05 19:34:22.644367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.309 [2024-12-05 19:34:22.647098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.309 [2024-12-05 19:34:22.647145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.309 BaseBdev1 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 BaseBdev2_malloc 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:29.309 19:34:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 true 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 [2024-12-05 19:34:22.702162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:29.309 [2024-12-05 19:34:22.702273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.309 [2024-12-05 19:34:22.702311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:29.309 [2024-12-05 19:34:22.702332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.309 [2024-12-05 19:34:22.705202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.309 [2024-12-05 19:34:22.705263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.309 BaseBdev2 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.309 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:29.571 BaseBdev3_malloc 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.571 true 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.571 [2024-12-05 19:34:22.777931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:29.571 [2024-12-05 19:34:22.778133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.571 [2024-12-05 19:34:22.778171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:29.571 [2024-12-05 19:34:22.778190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.571 [2024-12-05 19:34:22.781155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.571 [2024-12-05 19:34:22.781207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.571 BaseBdev3 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.571 BaseBdev4_malloc 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.571 true 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.571 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.571 [2024-12-05 19:34:22.840106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:29.571 [2024-12-05 19:34:22.840375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.572 [2024-12-05 19:34:22.840414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:29.572 [2024-12-05 19:34:22.840433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.572 [2024-12-05 19:34:22.843395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.572 BaseBdev4 00:14:29.572 [2024-12-05 19:34:22.843569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.572 [2024-12-05 19:34:22.848318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.572 [2024-12-05 19:34:22.850867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.572 [2024-12-05 19:34:22.850975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.572 [2024-12-05 19:34:22.851075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:29.572 [2024-12-05 19:34:22.851402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:29.572 [2024-12-05 19:34:22.851429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:29.572 [2024-12-05 19:34:22.851781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:29.572 [2024-12-05 19:34:22.852025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:29.572 [2024-12-05 19:34:22.852049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:29.572 [2024-12-05 19:34:22.852323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.572 "name": "raid_bdev1", 00:14:29.572 "uuid": "3707577a-4dd3-41c6-a8ea-f4adf6f59cb2", 00:14:29.572 "strip_size_kb": 64, 00:14:29.572 "state": "online", 00:14:29.572 "raid_level": "raid0", 00:14:29.572 "superblock": true, 00:14:29.572 "num_base_bdevs": 4, 00:14:29.572 "num_base_bdevs_discovered": 4, 00:14:29.572 
"num_base_bdevs_operational": 4, 00:14:29.572 "base_bdevs_list": [ 00:14:29.572 { 00:14:29.572 "name": "BaseBdev1", 00:14:29.572 "uuid": "a6cb2779-7ec0-53fa-94ed-3c201005214a", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 }, 00:14:29.572 { 00:14:29.572 "name": "BaseBdev2", 00:14:29.572 "uuid": "036ff44a-c42d-5086-b4d7-475a9c7b2904", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 }, 00:14:29.572 { 00:14:29.572 "name": "BaseBdev3", 00:14:29.572 "uuid": "347298e4-6282-59ea-8102-21a1d052ceb0", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 }, 00:14:29.572 { 00:14:29.572 "name": "BaseBdev4", 00:14:29.572 "uuid": "174ddaab-57d6-51ad-ae44-35d9f6c4546f", 00:14:29.572 "is_configured": true, 00:14:29.572 "data_offset": 2048, 00:14:29.572 "data_size": 63488 00:14:29.572 } 00:14:29.572 ] 00:14:29.572 }' 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.572 19:34:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.149 19:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:30.149 19:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:30.149 [2024-12-05 19:34:23.457944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.086 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.087 "name": "raid_bdev1", 00:14:31.087 "uuid": "3707577a-4dd3-41c6-a8ea-f4adf6f59cb2", 00:14:31.087 "strip_size_kb": 64, 00:14:31.087 "state": "online", 00:14:31.087 "raid_level": "raid0", 00:14:31.087 "superblock": true, 00:14:31.087 "num_base_bdevs": 4, 00:14:31.087 "num_base_bdevs_discovered": 4, 00:14:31.087 "num_base_bdevs_operational": 4, 00:14:31.087 "base_bdevs_list": [ 00:14:31.087 { 00:14:31.087 "name": "BaseBdev1", 00:14:31.087 "uuid": "a6cb2779-7ec0-53fa-94ed-3c201005214a", 00:14:31.087 "is_configured": true, 00:14:31.087 "data_offset": 2048, 00:14:31.087 "data_size": 63488 00:14:31.087 }, 00:14:31.087 { 00:14:31.087 "name": "BaseBdev2", 00:14:31.087 "uuid": "036ff44a-c42d-5086-b4d7-475a9c7b2904", 00:14:31.087 "is_configured": true, 00:14:31.087 "data_offset": 2048, 00:14:31.087 "data_size": 63488 00:14:31.087 }, 00:14:31.087 { 00:14:31.087 "name": "BaseBdev3", 00:14:31.087 "uuid": "347298e4-6282-59ea-8102-21a1d052ceb0", 00:14:31.087 "is_configured": true, 00:14:31.087 "data_offset": 2048, 00:14:31.087 "data_size": 63488 00:14:31.087 }, 00:14:31.087 { 00:14:31.087 "name": "BaseBdev4", 00:14:31.087 "uuid": "174ddaab-57d6-51ad-ae44-35d9f6c4546f", 00:14:31.087 "is_configured": true, 00:14:31.087 "data_offset": 2048, 00:14:31.087 "data_size": 63488 00:14:31.087 } 00:14:31.087 ] 00:14:31.087 }' 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.087 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.656 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.656 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.656 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:31.656 [2024-12-05 19:34:24.888256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.656 [2024-12-05 19:34:24.888294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.656 [2024-12-05 19:34:24.891997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.656 [2024-12-05 19:34:24.892271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.657 [2024-12-05 19:34:24.892378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.657 [2024-12-05 19:34:24.892600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:31.657 { 00:14:31.657 "results": [ 00:14:31.657 { 00:14:31.657 "job": "raid_bdev1", 00:14:31.657 "core_mask": "0x1", 00:14:31.657 "workload": "randrw", 00:14:31.657 "percentage": 50, 00:14:31.657 "status": "finished", 00:14:31.657 "queue_depth": 1, 00:14:31.657 "io_size": 131072, 00:14:31.657 "runtime": 1.427553, 00:14:31.657 "iops": 10115.911633403453, 00:14:31.657 "mibps": 1264.4889541754317, 00:14:31.657 "io_failed": 1, 00:14:31.657 "io_timeout": 0, 00:14:31.657 "avg_latency_us": 138.22281376288853, 00:14:31.657 "min_latency_us": 39.33090909090909, 00:14:31.657 "max_latency_us": 1876.7127272727273 00:14:31.657 } 00:14:31.657 ], 00:14:31.657 "core_count": 1 00:14:31.657 } 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71237 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71237 ']' 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71237 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71237 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.657 killing process with pid 71237 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71237' 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71237 00:14:31.657 [2024-12-05 19:34:24.929152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.657 19:34:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71237 00:14:31.916 [2024-12-05 19:34:25.223260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i6kTzMF3iP 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:33.294 00:14:33.294 real 0m4.871s 00:14:33.294 user 0m5.977s 00:14:33.294 sys 0m0.620s 00:14:33.294 
19:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.294 ************************************ 00:14:33.294 END TEST raid_write_error_test 00:14:33.294 ************************************ 00:14:33.294 19:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.294 19:34:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:33.294 19:34:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:33.294 19:34:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:33.294 19:34:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.294 19:34:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.294 ************************************ 00:14:33.294 START TEST raid_state_function_test 00:14:33.294 ************************************ 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.294 19:34:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:33.294 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:33.295 19:34:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71386 00:14:33.295 Process raid pid: 71386 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71386' 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71386 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71386 ']' 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.295 19:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.295 [2024-12-05 19:34:26.501652] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:14:33.295 [2024-12-05 19:34:26.501908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.295 [2024-12-05 19:34:26.690968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.554 [2024-12-05 19:34:26.821428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.813 [2024-12-05 19:34:27.033682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.813 [2024-12-05 19:34:27.033747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.072 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.072 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.072 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:34.072 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.072 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.331 [2024-12-05 19:34:27.515573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.331 [2024-12-05 19:34:27.515643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.331 [2024-12-05 19:34:27.515672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.331 [2024-12-05 19:34:27.515691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.331 [2024-12-05 19:34:27.515715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:34.331 [2024-12-05 19:34:27.515734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.331 [2024-12-05 19:34:27.515744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.331 [2024-12-05 19:34:27.515758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.331 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.331 "name": "Existed_Raid", 00:14:34.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.331 "strip_size_kb": 64, 00:14:34.331 "state": "configuring", 00:14:34.331 "raid_level": "concat", 00:14:34.331 "superblock": false, 00:14:34.331 "num_base_bdevs": 4, 00:14:34.331 "num_base_bdevs_discovered": 0, 00:14:34.331 "num_base_bdevs_operational": 4, 00:14:34.332 "base_bdevs_list": [ 00:14:34.332 { 00:14:34.332 "name": "BaseBdev1", 00:14:34.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.332 "is_configured": false, 00:14:34.332 "data_offset": 0, 00:14:34.332 "data_size": 0 00:14:34.332 }, 00:14:34.332 { 00:14:34.332 "name": "BaseBdev2", 00:14:34.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.332 "is_configured": false, 00:14:34.332 "data_offset": 0, 00:14:34.332 "data_size": 0 00:14:34.332 }, 00:14:34.332 { 00:14:34.332 "name": "BaseBdev3", 00:14:34.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.332 "is_configured": false, 00:14:34.332 "data_offset": 0, 00:14:34.332 "data_size": 0 00:14:34.332 }, 00:14:34.332 { 00:14:34.332 "name": "BaseBdev4", 00:14:34.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.332 "is_configured": false, 00:14:34.332 "data_offset": 0, 00:14:34.332 "data_size": 0 00:14:34.332 } 00:14:34.332 ] 00:14:34.332 }' 00:14:34.332 19:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.332 19:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.899 [2024-12-05 19:34:28.055698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.899 [2024-12-05 19:34:28.055765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.899 [2024-12-05 19:34:28.067650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.899 [2024-12-05 19:34:28.067748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.899 [2024-12-05 19:34:28.067765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.899 [2024-12-05 19:34:28.067782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.899 [2024-12-05 19:34:28.067791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.899 [2024-12-05 19:34:28.067806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.899 [2024-12-05 19:34:28.067815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.899 [2024-12-05 19:34:28.067829] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.899 [2024-12-05 19:34:28.116052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.899 BaseBdev1 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.899 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.899 [ 00:14:34.899 { 00:14:34.899 "name": "BaseBdev1", 00:14:34.899 "aliases": [ 00:14:34.899 "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef" 00:14:34.899 ], 00:14:34.899 "product_name": "Malloc disk", 00:14:34.899 "block_size": 512, 00:14:34.899 "num_blocks": 65536, 00:14:34.899 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:34.899 "assigned_rate_limits": { 00:14:34.899 "rw_ios_per_sec": 0, 00:14:34.899 "rw_mbytes_per_sec": 0, 00:14:34.899 "r_mbytes_per_sec": 0, 00:14:34.899 "w_mbytes_per_sec": 0 00:14:34.899 }, 00:14:34.899 "claimed": true, 00:14:34.899 "claim_type": "exclusive_write", 00:14:34.899 "zoned": false, 00:14:34.899 "supported_io_types": { 00:14:34.899 "read": true, 00:14:34.899 "write": true, 00:14:34.899 "unmap": true, 00:14:34.899 "flush": true, 00:14:34.899 "reset": true, 00:14:34.899 "nvme_admin": false, 00:14:34.899 "nvme_io": false, 00:14:34.899 "nvme_io_md": false, 00:14:34.899 "write_zeroes": true, 00:14:34.899 "zcopy": true, 00:14:34.899 "get_zone_info": false, 00:14:34.899 "zone_management": false, 00:14:34.900 "zone_append": false, 00:14:34.900 "compare": false, 00:14:34.900 "compare_and_write": false, 00:14:34.900 "abort": true, 00:14:34.900 "seek_hole": false, 00:14:34.900 "seek_data": false, 00:14:34.900 "copy": true, 00:14:34.900 "nvme_iov_md": false 00:14:34.900 }, 00:14:34.900 "memory_domains": [ 00:14:34.900 { 00:14:34.900 "dma_device_id": "system", 00:14:34.900 "dma_device_type": 1 00:14:34.900 }, 00:14:34.900 { 00:14:34.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.900 "dma_device_type": 2 00:14:34.900 } 00:14:34.900 ], 00:14:34.900 "driver_specific": {} 00:14:34.900 } 00:14:34.900 ] 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.900 "name": "Existed_Raid", 
00:14:34.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.900 "strip_size_kb": 64, 00:14:34.900 "state": "configuring", 00:14:34.900 "raid_level": "concat", 00:14:34.900 "superblock": false, 00:14:34.900 "num_base_bdevs": 4, 00:14:34.900 "num_base_bdevs_discovered": 1, 00:14:34.900 "num_base_bdevs_operational": 4, 00:14:34.900 "base_bdevs_list": [ 00:14:34.900 { 00:14:34.900 "name": "BaseBdev1", 00:14:34.900 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:34.900 "is_configured": true, 00:14:34.900 "data_offset": 0, 00:14:34.900 "data_size": 65536 00:14:34.900 }, 00:14:34.900 { 00:14:34.900 "name": "BaseBdev2", 00:14:34.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.900 "is_configured": false, 00:14:34.900 "data_offset": 0, 00:14:34.900 "data_size": 0 00:14:34.900 }, 00:14:34.900 { 00:14:34.900 "name": "BaseBdev3", 00:14:34.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.900 "is_configured": false, 00:14:34.900 "data_offset": 0, 00:14:34.900 "data_size": 0 00:14:34.900 }, 00:14:34.900 { 00:14:34.900 "name": "BaseBdev4", 00:14:34.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.900 "is_configured": false, 00:14:34.900 "data_offset": 0, 00:14:34.900 "data_size": 0 00:14:34.900 } 00:14:34.900 ] 00:14:34.900 }' 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.900 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.467 [2024-12-05 19:34:28.708305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.467 [2024-12-05 19:34:28.708405] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.467 [2024-12-05 19:34:28.720352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.467 [2024-12-05 19:34:28.722836] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.467 [2024-12-05 19:34:28.722909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.467 [2024-12-05 19:34:28.722926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.467 [2024-12-05 19:34:28.722943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.467 [2024-12-05 19:34:28.722954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.467 [2024-12-05 19:34:28.722967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.467 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.467 "name": "Existed_Raid", 00:14:35.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.467 "strip_size_kb": 64, 00:14:35.467 "state": "configuring", 00:14:35.467 "raid_level": "concat", 00:14:35.467 "superblock": false, 00:14:35.467 "num_base_bdevs": 4, 00:14:35.467 
"num_base_bdevs_discovered": 1, 00:14:35.467 "num_base_bdevs_operational": 4, 00:14:35.467 "base_bdevs_list": [ 00:14:35.467 { 00:14:35.467 "name": "BaseBdev1", 00:14:35.467 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:35.468 "is_configured": true, 00:14:35.468 "data_offset": 0, 00:14:35.468 "data_size": 65536 00:14:35.468 }, 00:14:35.468 { 00:14:35.468 "name": "BaseBdev2", 00:14:35.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.468 "is_configured": false, 00:14:35.468 "data_offset": 0, 00:14:35.468 "data_size": 0 00:14:35.468 }, 00:14:35.468 { 00:14:35.468 "name": "BaseBdev3", 00:14:35.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.468 "is_configured": false, 00:14:35.468 "data_offset": 0, 00:14:35.468 "data_size": 0 00:14:35.468 }, 00:14:35.468 { 00:14:35.468 "name": "BaseBdev4", 00:14:35.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.468 "is_configured": false, 00:14:35.468 "data_offset": 0, 00:14:35.468 "data_size": 0 00:14:35.468 } 00:14:35.468 ] 00:14:35.468 }' 00:14:35.468 19:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.468 19:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.036 [2024-12-05 19:34:29.266104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.036 BaseBdev2 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:36.036 19:34:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.036 [ 00:14:36.036 { 00:14:36.036 "name": "BaseBdev2", 00:14:36.036 "aliases": [ 00:14:36.036 "6cea6b16-c6fc-428f-b68e-7023dd242b57" 00:14:36.036 ], 00:14:36.036 "product_name": "Malloc disk", 00:14:36.036 "block_size": 512, 00:14:36.036 "num_blocks": 65536, 00:14:36.036 "uuid": "6cea6b16-c6fc-428f-b68e-7023dd242b57", 00:14:36.036 "assigned_rate_limits": { 00:14:36.036 "rw_ios_per_sec": 0, 00:14:36.036 "rw_mbytes_per_sec": 0, 00:14:36.036 "r_mbytes_per_sec": 0, 00:14:36.036 "w_mbytes_per_sec": 0 00:14:36.036 }, 00:14:36.036 "claimed": true, 00:14:36.036 "claim_type": "exclusive_write", 00:14:36.036 "zoned": false, 00:14:36.036 "supported_io_types": { 
00:14:36.036 "read": true, 00:14:36.036 "write": true, 00:14:36.036 "unmap": true, 00:14:36.036 "flush": true, 00:14:36.036 "reset": true, 00:14:36.036 "nvme_admin": false, 00:14:36.036 "nvme_io": false, 00:14:36.036 "nvme_io_md": false, 00:14:36.036 "write_zeroes": true, 00:14:36.036 "zcopy": true, 00:14:36.036 "get_zone_info": false, 00:14:36.036 "zone_management": false, 00:14:36.036 "zone_append": false, 00:14:36.036 "compare": false, 00:14:36.036 "compare_and_write": false, 00:14:36.036 "abort": true, 00:14:36.036 "seek_hole": false, 00:14:36.036 "seek_data": false, 00:14:36.036 "copy": true, 00:14:36.036 "nvme_iov_md": false 00:14:36.036 }, 00:14:36.036 "memory_domains": [ 00:14:36.036 { 00:14:36.036 "dma_device_id": "system", 00:14:36.036 "dma_device_type": 1 00:14:36.036 }, 00:14:36.036 { 00:14:36.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.036 "dma_device_type": 2 00:14:36.036 } 00:14:36.036 ], 00:14:36.036 "driver_specific": {} 00:14:36.036 } 00:14:36.036 ] 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.036 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.036 "name": "Existed_Raid", 00:14:36.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.036 "strip_size_kb": 64, 00:14:36.036 "state": "configuring", 00:14:36.036 "raid_level": "concat", 00:14:36.036 "superblock": false, 00:14:36.036 "num_base_bdevs": 4, 00:14:36.036 "num_base_bdevs_discovered": 2, 00:14:36.036 "num_base_bdevs_operational": 4, 00:14:36.036 "base_bdevs_list": [ 00:14:36.036 { 00:14:36.036 "name": "BaseBdev1", 00:14:36.036 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:36.036 "is_configured": true, 00:14:36.036 "data_offset": 0, 00:14:36.036 "data_size": 65536 00:14:36.036 }, 00:14:36.036 { 00:14:36.036 "name": "BaseBdev2", 00:14:36.036 "uuid": "6cea6b16-c6fc-428f-b68e-7023dd242b57", 00:14:36.036 
"is_configured": true, 00:14:36.036 "data_offset": 0, 00:14:36.037 "data_size": 65536 00:14:36.037 }, 00:14:36.037 { 00:14:36.037 "name": "BaseBdev3", 00:14:36.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.037 "is_configured": false, 00:14:36.037 "data_offset": 0, 00:14:36.037 "data_size": 0 00:14:36.037 }, 00:14:36.037 { 00:14:36.037 "name": "BaseBdev4", 00:14:36.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.037 "is_configured": false, 00:14:36.037 "data_offset": 0, 00:14:36.037 "data_size": 0 00:14:36.037 } 00:14:36.037 ] 00:14:36.037 }' 00:14:36.037 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.037 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.611 [2024-12-05 19:34:29.902391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.611 BaseBdev3 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.611 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.611 [ 00:14:36.611 { 00:14:36.611 "name": "BaseBdev3", 00:14:36.611 "aliases": [ 00:14:36.611 "3e7d2dbd-f0de-4586-8b51-9733370417f0" 00:14:36.611 ], 00:14:36.611 "product_name": "Malloc disk", 00:14:36.611 "block_size": 512, 00:14:36.611 "num_blocks": 65536, 00:14:36.611 "uuid": "3e7d2dbd-f0de-4586-8b51-9733370417f0", 00:14:36.611 "assigned_rate_limits": { 00:14:36.611 "rw_ios_per_sec": 0, 00:14:36.611 "rw_mbytes_per_sec": 0, 00:14:36.611 "r_mbytes_per_sec": 0, 00:14:36.611 "w_mbytes_per_sec": 0 00:14:36.611 }, 00:14:36.611 "claimed": true, 00:14:36.611 "claim_type": "exclusive_write", 00:14:36.612 "zoned": false, 00:14:36.612 "supported_io_types": { 00:14:36.612 "read": true, 00:14:36.612 "write": true, 00:14:36.612 "unmap": true, 00:14:36.612 "flush": true, 00:14:36.612 "reset": true, 00:14:36.612 "nvme_admin": false, 00:14:36.612 "nvme_io": false, 00:14:36.612 "nvme_io_md": false, 00:14:36.612 "write_zeroes": true, 00:14:36.612 "zcopy": true, 00:14:36.612 "get_zone_info": false, 00:14:36.612 "zone_management": false, 00:14:36.612 "zone_append": false, 00:14:36.612 "compare": false, 00:14:36.612 "compare_and_write": false, 
00:14:36.612 "abort": true, 00:14:36.612 "seek_hole": false, 00:14:36.612 "seek_data": false, 00:14:36.612 "copy": true, 00:14:36.612 "nvme_iov_md": false 00:14:36.612 }, 00:14:36.612 "memory_domains": [ 00:14:36.612 { 00:14:36.612 "dma_device_id": "system", 00:14:36.612 "dma_device_type": 1 00:14:36.612 }, 00:14:36.612 { 00:14:36.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.612 "dma_device_type": 2 00:14:36.612 } 00:14:36.612 ], 00:14:36.612 "driver_specific": {} 00:14:36.612 } 00:14:36.612 ] 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.612 19:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.612 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.612 "name": "Existed_Raid", 00:14:36.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.612 "strip_size_kb": 64, 00:14:36.612 "state": "configuring", 00:14:36.612 "raid_level": "concat", 00:14:36.612 "superblock": false, 00:14:36.612 "num_base_bdevs": 4, 00:14:36.612 "num_base_bdevs_discovered": 3, 00:14:36.612 "num_base_bdevs_operational": 4, 00:14:36.612 "base_bdevs_list": [ 00:14:36.612 { 00:14:36.612 "name": "BaseBdev1", 00:14:36.612 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:36.612 "is_configured": true, 00:14:36.612 "data_offset": 0, 00:14:36.612 "data_size": 65536 00:14:36.612 }, 00:14:36.612 { 00:14:36.612 "name": "BaseBdev2", 00:14:36.612 "uuid": "6cea6b16-c6fc-428f-b68e-7023dd242b57", 00:14:36.612 "is_configured": true, 00:14:36.612 "data_offset": 0, 00:14:36.612 "data_size": 65536 00:14:36.612 }, 00:14:36.612 { 00:14:36.612 "name": "BaseBdev3", 00:14:36.612 "uuid": "3e7d2dbd-f0de-4586-8b51-9733370417f0", 00:14:36.612 "is_configured": true, 00:14:36.612 "data_offset": 0, 00:14:36.612 "data_size": 65536 00:14:36.612 }, 00:14:36.612 { 00:14:36.612 "name": "BaseBdev4", 00:14:36.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.612 "is_configured": false, 
00:14:36.612 "data_offset": 0, 00:14:36.612 "data_size": 0 00:14:36.612 } 00:14:36.612 ] 00:14:36.612 }' 00:14:36.612 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.612 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.177 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.178 [2024-12-05 19:34:30.520288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.178 [2024-12-05 19:34:30.520364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:37.178 [2024-12-05 19:34:30.520377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:37.178 [2024-12-05 19:34:30.520747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:37.178 [2024-12-05 19:34:30.521007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:37.178 [2024-12-05 19:34:30.521039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:37.178 [2024-12-05 19:34:30.521358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.178 BaseBdev4 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.178 [ 00:14:37.178 { 00:14:37.178 "name": "BaseBdev4", 00:14:37.178 "aliases": [ 00:14:37.178 "a04ce449-d939-445a-a822-2da3980f6a14" 00:14:37.178 ], 00:14:37.178 "product_name": "Malloc disk", 00:14:37.178 "block_size": 512, 00:14:37.178 "num_blocks": 65536, 00:14:37.178 "uuid": "a04ce449-d939-445a-a822-2da3980f6a14", 00:14:37.178 "assigned_rate_limits": { 00:14:37.178 "rw_ios_per_sec": 0, 00:14:37.178 "rw_mbytes_per_sec": 0, 00:14:37.178 "r_mbytes_per_sec": 0, 00:14:37.178 "w_mbytes_per_sec": 0 00:14:37.178 }, 00:14:37.178 "claimed": true, 00:14:37.178 "claim_type": "exclusive_write", 00:14:37.178 "zoned": false, 00:14:37.178 "supported_io_types": { 00:14:37.178 "read": true, 00:14:37.178 "write": true, 00:14:37.178 "unmap": true, 00:14:37.178 "flush": true, 00:14:37.178 "reset": true, 00:14:37.178 
"nvme_admin": false, 00:14:37.178 "nvme_io": false, 00:14:37.178 "nvme_io_md": false, 00:14:37.178 "write_zeroes": true, 00:14:37.178 "zcopy": true, 00:14:37.178 "get_zone_info": false, 00:14:37.178 "zone_management": false, 00:14:37.178 "zone_append": false, 00:14:37.178 "compare": false, 00:14:37.178 "compare_and_write": false, 00:14:37.178 "abort": true, 00:14:37.178 "seek_hole": false, 00:14:37.178 "seek_data": false, 00:14:37.178 "copy": true, 00:14:37.178 "nvme_iov_md": false 00:14:37.178 }, 00:14:37.178 "memory_domains": [ 00:14:37.178 { 00:14:37.178 "dma_device_id": "system", 00:14:37.178 "dma_device_type": 1 00:14:37.178 }, 00:14:37.178 { 00:14:37.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.178 "dma_device_type": 2 00:14:37.178 } 00:14:37.178 ], 00:14:37.178 "driver_specific": {} 00:14:37.178 } 00:14:37.178 ] 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.178 
19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.178 "name": "Existed_Raid", 00:14:37.178 "uuid": "aaf06c3a-84b6-49d4-a18c-a09ee1f31a22", 00:14:37.178 "strip_size_kb": 64, 00:14:37.178 "state": "online", 00:14:37.178 "raid_level": "concat", 00:14:37.178 "superblock": false, 00:14:37.178 "num_base_bdevs": 4, 00:14:37.178 "num_base_bdevs_discovered": 4, 00:14:37.178 "num_base_bdevs_operational": 4, 00:14:37.178 "base_bdevs_list": [ 00:14:37.178 { 00:14:37.178 "name": "BaseBdev1", 00:14:37.178 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:37.178 "is_configured": true, 00:14:37.178 "data_offset": 0, 00:14:37.178 "data_size": 65536 00:14:37.178 }, 00:14:37.178 { 00:14:37.178 "name": "BaseBdev2", 00:14:37.178 "uuid": "6cea6b16-c6fc-428f-b68e-7023dd242b57", 00:14:37.178 "is_configured": true, 00:14:37.178 "data_offset": 0, 00:14:37.178 "data_size": 65536 00:14:37.178 }, 00:14:37.178 { 00:14:37.178 "name": "BaseBdev3", 
00:14:37.178 "uuid": "3e7d2dbd-f0de-4586-8b51-9733370417f0", 00:14:37.178 "is_configured": true, 00:14:37.178 "data_offset": 0, 00:14:37.178 "data_size": 65536 00:14:37.178 }, 00:14:37.178 { 00:14:37.178 "name": "BaseBdev4", 00:14:37.178 "uuid": "a04ce449-d939-445a-a822-2da3980f6a14", 00:14:37.178 "is_configured": true, 00:14:37.178 "data_offset": 0, 00:14:37.178 "data_size": 65536 00:14:37.178 } 00:14:37.178 ] 00:14:37.178 }' 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.178 19:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.745 [2024-12-05 19:34:31.081041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.745 
19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.745 "name": "Existed_Raid", 00:14:37.745 "aliases": [ 00:14:37.745 "aaf06c3a-84b6-49d4-a18c-a09ee1f31a22" 00:14:37.745 ], 00:14:37.745 "product_name": "Raid Volume", 00:14:37.745 "block_size": 512, 00:14:37.745 "num_blocks": 262144, 00:14:37.745 "uuid": "aaf06c3a-84b6-49d4-a18c-a09ee1f31a22", 00:14:37.745 "assigned_rate_limits": { 00:14:37.745 "rw_ios_per_sec": 0, 00:14:37.745 "rw_mbytes_per_sec": 0, 00:14:37.745 "r_mbytes_per_sec": 0, 00:14:37.745 "w_mbytes_per_sec": 0 00:14:37.745 }, 00:14:37.745 "claimed": false, 00:14:37.745 "zoned": false, 00:14:37.745 "supported_io_types": { 00:14:37.745 "read": true, 00:14:37.745 "write": true, 00:14:37.745 "unmap": true, 00:14:37.745 "flush": true, 00:14:37.745 "reset": true, 00:14:37.745 "nvme_admin": false, 00:14:37.745 "nvme_io": false, 00:14:37.745 "nvme_io_md": false, 00:14:37.745 "write_zeroes": true, 00:14:37.745 "zcopy": false, 00:14:37.745 "get_zone_info": false, 00:14:37.745 "zone_management": false, 00:14:37.745 "zone_append": false, 00:14:37.745 "compare": false, 00:14:37.745 "compare_and_write": false, 00:14:37.745 "abort": false, 00:14:37.745 "seek_hole": false, 00:14:37.745 "seek_data": false, 00:14:37.745 "copy": false, 00:14:37.745 "nvme_iov_md": false 00:14:37.745 }, 00:14:37.745 "memory_domains": [ 00:14:37.745 { 00:14:37.745 "dma_device_id": "system", 00:14:37.745 "dma_device_type": 1 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.745 "dma_device_type": 2 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": "system", 00:14:37.745 "dma_device_type": 1 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.745 "dma_device_type": 2 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": "system", 00:14:37.745 "dma_device_type": 1 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:37.745 "dma_device_type": 2 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": "system", 00:14:37.745 "dma_device_type": 1 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.745 "dma_device_type": 2 00:14:37.745 } 00:14:37.745 ], 00:14:37.745 "driver_specific": { 00:14:37.745 "raid": { 00:14:37.745 "uuid": "aaf06c3a-84b6-49d4-a18c-a09ee1f31a22", 00:14:37.745 "strip_size_kb": 64, 00:14:37.745 "state": "online", 00:14:37.745 "raid_level": "concat", 00:14:37.745 "superblock": false, 00:14:37.745 "num_base_bdevs": 4, 00:14:37.745 "num_base_bdevs_discovered": 4, 00:14:37.745 "num_base_bdevs_operational": 4, 00:14:37.745 "base_bdevs_list": [ 00:14:37.745 { 00:14:37.745 "name": "BaseBdev1", 00:14:37.745 "uuid": "d44c4b1b-baed-4d44-b260-5ed2b6ba02ef", 00:14:37.745 "is_configured": true, 00:14:37.745 "data_offset": 0, 00:14:37.745 "data_size": 65536 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "name": "BaseBdev2", 00:14:37.745 "uuid": "6cea6b16-c6fc-428f-b68e-7023dd242b57", 00:14:37.745 "is_configured": true, 00:14:37.745 "data_offset": 0, 00:14:37.745 "data_size": 65536 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "name": "BaseBdev3", 00:14:37.745 "uuid": "3e7d2dbd-f0de-4586-8b51-9733370417f0", 00:14:37.745 "is_configured": true, 00:14:37.745 "data_offset": 0, 00:14:37.745 "data_size": 65536 00:14:37.745 }, 00:14:37.745 { 00:14:37.745 "name": "BaseBdev4", 00:14:37.745 "uuid": "a04ce449-d939-445a-a822-2da3980f6a14", 00:14:37.745 "is_configured": true, 00:14:37.745 "data_offset": 0, 00:14:37.745 "data_size": 65536 00:14:37.745 } 00:14:37.745 ] 00:14:37.745 } 00:14:37.745 } 00:14:37.745 }' 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:37.745 BaseBdev2 
00:14:37.745 BaseBdev3 00:14:37.745 BaseBdev4' 00:14:37.745 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.005 19:34:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.005 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.264 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.264 19:34:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.264 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:38.264 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.264 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.265 [2024-12-05 19:34:31.464691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.265 [2024-12-05 19:34:31.464782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.265 [2024-12-05 19:34:31.464853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.265 "name": "Existed_Raid", 00:14:38.265 "uuid": "aaf06c3a-84b6-49d4-a18c-a09ee1f31a22", 00:14:38.265 "strip_size_kb": 64, 00:14:38.265 "state": "offline", 00:14:38.265 "raid_level": "concat", 00:14:38.265 "superblock": false, 00:14:38.265 "num_base_bdevs": 4, 00:14:38.265 "num_base_bdevs_discovered": 3, 00:14:38.265 "num_base_bdevs_operational": 3, 00:14:38.265 "base_bdevs_list": [ 00:14:38.265 { 00:14:38.265 "name": null, 00:14:38.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.265 "is_configured": false, 00:14:38.265 "data_offset": 0, 00:14:38.265 "data_size": 65536 00:14:38.265 }, 00:14:38.265 { 00:14:38.265 "name": "BaseBdev2", 00:14:38.265 "uuid": "6cea6b16-c6fc-428f-b68e-7023dd242b57", 00:14:38.265 "is_configured": 
true, 00:14:38.265 "data_offset": 0, 00:14:38.265 "data_size": 65536 00:14:38.265 }, 00:14:38.265 { 00:14:38.265 "name": "BaseBdev3", 00:14:38.265 "uuid": "3e7d2dbd-f0de-4586-8b51-9733370417f0", 00:14:38.265 "is_configured": true, 00:14:38.265 "data_offset": 0, 00:14:38.265 "data_size": 65536 00:14:38.265 }, 00:14:38.265 { 00:14:38.265 "name": "BaseBdev4", 00:14:38.265 "uuid": "a04ce449-d939-445a-a822-2da3980f6a14", 00:14:38.265 "is_configured": true, 00:14:38.265 "data_offset": 0, 00:14:38.265 "data_size": 65536 00:14:38.265 } 00:14:38.265 ] 00:14:38.265 }' 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.265 19:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.833 [2024-12-05 19:34:32.151630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.833 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.092 [2024-12-05 19:34:32.281222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:39.092 19:34:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.092 [2024-12-05 19:34:32.425071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:39.092 [2024-12-05 19:34:32.425199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:39.092 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.352 BaseBdev2 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.352 [ 00:14:39.352 { 00:14:39.352 "name": "BaseBdev2", 00:14:39.352 "aliases": [ 00:14:39.352 "05622ecb-45f7-4530-aeeb-f84ec5adab94" 00:14:39.352 ], 00:14:39.352 "product_name": "Malloc disk", 00:14:39.352 "block_size": 512, 00:14:39.352 "num_blocks": 65536, 00:14:39.352 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:39.352 "assigned_rate_limits": { 00:14:39.352 "rw_ios_per_sec": 0, 00:14:39.352 "rw_mbytes_per_sec": 0, 00:14:39.352 "r_mbytes_per_sec": 0, 00:14:39.352 "w_mbytes_per_sec": 0 00:14:39.352 }, 00:14:39.352 "claimed": false, 00:14:39.352 "zoned": false, 00:14:39.352 "supported_io_types": { 00:14:39.352 "read": true, 00:14:39.352 "write": true, 00:14:39.352 "unmap": true, 00:14:39.352 "flush": true, 00:14:39.352 "reset": true, 00:14:39.352 "nvme_admin": false, 00:14:39.352 "nvme_io": false, 00:14:39.352 "nvme_io_md": false, 00:14:39.352 "write_zeroes": true, 00:14:39.352 "zcopy": true, 00:14:39.352 "get_zone_info": false, 00:14:39.352 "zone_management": false, 00:14:39.352 "zone_append": false, 00:14:39.352 "compare": false, 00:14:39.352 "compare_and_write": false, 00:14:39.352 "abort": true, 00:14:39.352 "seek_hole": false, 00:14:39.352 
"seek_data": false, 00:14:39.352 "copy": true, 00:14:39.352 "nvme_iov_md": false 00:14:39.352 }, 00:14:39.352 "memory_domains": [ 00:14:39.352 { 00:14:39.352 "dma_device_id": "system", 00:14:39.352 "dma_device_type": 1 00:14:39.352 }, 00:14:39.352 { 00:14:39.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.352 "dma_device_type": 2 00:14:39.352 } 00:14:39.352 ], 00:14:39.352 "driver_specific": {} 00:14:39.352 } 00:14:39.352 ] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.352 BaseBdev3 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.352 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 [ 00:14:39.353 { 00:14:39.353 "name": "BaseBdev3", 00:14:39.353 "aliases": [ 00:14:39.353 "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a" 00:14:39.353 ], 00:14:39.353 "product_name": "Malloc disk", 00:14:39.353 "block_size": 512, 00:14:39.353 "num_blocks": 65536, 00:14:39.353 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:39.353 "assigned_rate_limits": { 00:14:39.353 "rw_ios_per_sec": 0, 00:14:39.353 "rw_mbytes_per_sec": 0, 00:14:39.353 "r_mbytes_per_sec": 0, 00:14:39.353 "w_mbytes_per_sec": 0 00:14:39.353 }, 00:14:39.353 "claimed": false, 00:14:39.353 "zoned": false, 00:14:39.353 "supported_io_types": { 00:14:39.353 "read": true, 00:14:39.353 "write": true, 00:14:39.353 "unmap": true, 00:14:39.353 "flush": true, 00:14:39.353 "reset": true, 00:14:39.353 "nvme_admin": false, 00:14:39.353 "nvme_io": false, 00:14:39.353 "nvme_io_md": false, 00:14:39.353 "write_zeroes": true, 00:14:39.353 "zcopy": true, 00:14:39.353 "get_zone_info": false, 00:14:39.353 "zone_management": false, 00:14:39.353 "zone_append": false, 00:14:39.353 "compare": false, 00:14:39.353 "compare_and_write": false, 00:14:39.353 "abort": true, 00:14:39.353 "seek_hole": false, 00:14:39.353 "seek_data": false, 
00:14:39.353 "copy": true, 00:14:39.353 "nvme_iov_md": false 00:14:39.353 }, 00:14:39.353 "memory_domains": [ 00:14:39.353 { 00:14:39.353 "dma_device_id": "system", 00:14:39.353 "dma_device_type": 1 00:14:39.353 }, 00:14:39.353 { 00:14:39.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.353 "dma_device_type": 2 00:14:39.353 } 00:14:39.353 ], 00:14:39.353 "driver_specific": {} 00:14:39.353 } 00:14:39.353 ] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 BaseBdev4 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.353 
19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 [ 00:14:39.353 { 00:14:39.353 "name": "BaseBdev4", 00:14:39.353 "aliases": [ 00:14:39.353 "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096" 00:14:39.353 ], 00:14:39.353 "product_name": "Malloc disk", 00:14:39.353 "block_size": 512, 00:14:39.353 "num_blocks": 65536, 00:14:39.353 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:39.353 "assigned_rate_limits": { 00:14:39.353 "rw_ios_per_sec": 0, 00:14:39.353 "rw_mbytes_per_sec": 0, 00:14:39.353 "r_mbytes_per_sec": 0, 00:14:39.353 "w_mbytes_per_sec": 0 00:14:39.353 }, 00:14:39.353 "claimed": false, 00:14:39.353 "zoned": false, 00:14:39.353 "supported_io_types": { 00:14:39.353 "read": true, 00:14:39.353 "write": true, 00:14:39.353 "unmap": true, 00:14:39.353 "flush": true, 00:14:39.353 "reset": true, 00:14:39.353 "nvme_admin": false, 00:14:39.353 "nvme_io": false, 00:14:39.353 "nvme_io_md": false, 00:14:39.353 "write_zeroes": true, 00:14:39.353 "zcopy": true, 00:14:39.353 "get_zone_info": false, 00:14:39.353 "zone_management": false, 00:14:39.353 "zone_append": false, 00:14:39.353 "compare": false, 00:14:39.353 "compare_and_write": false, 00:14:39.353 "abort": true, 00:14:39.353 "seek_hole": false, 00:14:39.353 "seek_data": false, 00:14:39.353 
"copy": true, 00:14:39.353 "nvme_iov_md": false 00:14:39.353 }, 00:14:39.353 "memory_domains": [ 00:14:39.353 { 00:14:39.353 "dma_device_id": "system", 00:14:39.353 "dma_device_type": 1 00:14:39.353 }, 00:14:39.353 { 00:14:39.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.353 "dma_device_type": 2 00:14:39.353 } 00:14:39.353 ], 00:14:39.353 "driver_specific": {} 00:14:39.353 } 00:14:39.353 ] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 [2024-12-05 19:34:32.767104] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.353 [2024-12-05 19:34:32.767199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.353 [2024-12-05 19:34:32.767257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.353 [2024-12-05 19:34:32.769883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.353 [2024-12-05 19:34:32.769976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.353 19:34:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.353 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.613 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.613 "name": "Existed_Raid", 00:14:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.613 "strip_size_kb": 64, 00:14:39.613 "state": "configuring", 00:14:39.613 
"raid_level": "concat", 00:14:39.613 "superblock": false, 00:14:39.613 "num_base_bdevs": 4, 00:14:39.613 "num_base_bdevs_discovered": 3, 00:14:39.613 "num_base_bdevs_operational": 4, 00:14:39.613 "base_bdevs_list": [ 00:14:39.613 { 00:14:39.613 "name": "BaseBdev1", 00:14:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.613 "is_configured": false, 00:14:39.613 "data_offset": 0, 00:14:39.613 "data_size": 0 00:14:39.613 }, 00:14:39.613 { 00:14:39.613 "name": "BaseBdev2", 00:14:39.613 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:39.613 "is_configured": true, 00:14:39.613 "data_offset": 0, 00:14:39.613 "data_size": 65536 00:14:39.613 }, 00:14:39.613 { 00:14:39.613 "name": "BaseBdev3", 00:14:39.613 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:39.613 "is_configured": true, 00:14:39.613 "data_offset": 0, 00:14:39.613 "data_size": 65536 00:14:39.613 }, 00:14:39.613 { 00:14:39.613 "name": "BaseBdev4", 00:14:39.613 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:39.613 "is_configured": true, 00:14:39.613 "data_offset": 0, 00:14:39.613 "data_size": 65536 00:14:39.613 } 00:14:39.613 ] 00:14:39.613 }' 00:14:39.613 19:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.613 19:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.873 [2024-12-05 19:34:33.295344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.873 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.132 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.132 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.132 "name": "Existed_Raid", 00:14:40.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.133 "strip_size_kb": 64, 00:14:40.133 "state": "configuring", 00:14:40.133 "raid_level": "concat", 00:14:40.133 "superblock": false, 
00:14:40.133 "num_base_bdevs": 4, 00:14:40.133 "num_base_bdevs_discovered": 2, 00:14:40.133 "num_base_bdevs_operational": 4, 00:14:40.133 "base_bdevs_list": [ 00:14:40.133 { 00:14:40.133 "name": "BaseBdev1", 00:14:40.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.133 "is_configured": false, 00:14:40.133 "data_offset": 0, 00:14:40.133 "data_size": 0 00:14:40.133 }, 00:14:40.133 { 00:14:40.133 "name": null, 00:14:40.133 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:40.133 "is_configured": false, 00:14:40.133 "data_offset": 0, 00:14:40.133 "data_size": 65536 00:14:40.133 }, 00:14:40.133 { 00:14:40.133 "name": "BaseBdev3", 00:14:40.133 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:40.133 "is_configured": true, 00:14:40.133 "data_offset": 0, 00:14:40.133 "data_size": 65536 00:14:40.133 }, 00:14:40.133 { 00:14:40.133 "name": "BaseBdev4", 00:14:40.133 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:40.133 "is_configured": true, 00:14:40.133 "data_offset": 0, 00:14:40.133 "data_size": 65536 00:14:40.133 } 00:14:40.133 ] 00:14:40.133 }' 00:14:40.133 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.133 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.393 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:40.393 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.393 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.393 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.393 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:40.653 19:34:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.653 [2024-12-05 19:34:33.895777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.653 BaseBdev1 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.653 [ 00:14:40.653 { 00:14:40.653 "name": "BaseBdev1", 00:14:40.653 "aliases": [ 00:14:40.653 "be5fd404-bc6e-4b7b-8e43-8df1a986865e" 00:14:40.653 ], 00:14:40.653 "product_name": "Malloc disk", 00:14:40.653 "block_size": 512, 00:14:40.653 "num_blocks": 65536, 00:14:40.653 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:40.653 "assigned_rate_limits": { 00:14:40.653 "rw_ios_per_sec": 0, 00:14:40.653 "rw_mbytes_per_sec": 0, 00:14:40.653 "r_mbytes_per_sec": 0, 00:14:40.653 "w_mbytes_per_sec": 0 00:14:40.653 }, 00:14:40.653 "claimed": true, 00:14:40.653 "claim_type": "exclusive_write", 00:14:40.653 "zoned": false, 00:14:40.653 "supported_io_types": { 00:14:40.653 "read": true, 00:14:40.653 "write": true, 00:14:40.653 "unmap": true, 00:14:40.653 "flush": true, 00:14:40.653 "reset": true, 00:14:40.653 "nvme_admin": false, 00:14:40.653 "nvme_io": false, 00:14:40.653 "nvme_io_md": false, 00:14:40.653 "write_zeroes": true, 00:14:40.653 "zcopy": true, 00:14:40.653 "get_zone_info": false, 00:14:40.653 "zone_management": false, 00:14:40.653 "zone_append": false, 00:14:40.653 "compare": false, 00:14:40.653 "compare_and_write": false, 00:14:40.653 "abort": true, 00:14:40.653 "seek_hole": false, 00:14:40.653 "seek_data": false, 00:14:40.653 "copy": true, 00:14:40.653 "nvme_iov_md": false 00:14:40.653 }, 00:14:40.653 "memory_domains": [ 00:14:40.653 { 00:14:40.653 "dma_device_id": "system", 00:14:40.653 "dma_device_type": 1 00:14:40.653 }, 00:14:40.653 { 00:14:40.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.653 "dma_device_type": 2 00:14:40.653 } 00:14:40.653 ], 00:14:40.653 "driver_specific": {} 00:14:40.653 } 00:14:40.653 ] 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.653 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.654 "name": "Existed_Raid", 00:14:40.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.654 "strip_size_kb": 64, 00:14:40.654 "state": "configuring", 00:14:40.654 "raid_level": "concat", 00:14:40.654 "superblock": false, 
00:14:40.654 "num_base_bdevs": 4, 00:14:40.654 "num_base_bdevs_discovered": 3, 00:14:40.654 "num_base_bdevs_operational": 4, 00:14:40.654 "base_bdevs_list": [ 00:14:40.654 { 00:14:40.654 "name": "BaseBdev1", 00:14:40.654 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:40.654 "is_configured": true, 00:14:40.654 "data_offset": 0, 00:14:40.654 "data_size": 65536 00:14:40.654 }, 00:14:40.654 { 00:14:40.654 "name": null, 00:14:40.654 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:40.654 "is_configured": false, 00:14:40.654 "data_offset": 0, 00:14:40.654 "data_size": 65536 00:14:40.654 }, 00:14:40.654 { 00:14:40.654 "name": "BaseBdev3", 00:14:40.654 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:40.654 "is_configured": true, 00:14:40.654 "data_offset": 0, 00:14:40.654 "data_size": 65536 00:14:40.654 }, 00:14:40.654 { 00:14:40.654 "name": "BaseBdev4", 00:14:40.654 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:40.654 "is_configured": true, 00:14:40.654 "data_offset": 0, 00:14:40.654 "data_size": 65536 00:14:40.654 } 00:14:40.654 ] 00:14:40.654 }' 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.654 19:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:41.221 19:34:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.221 [2024-12-05 19:34:34.496073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.221 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.222 19:34:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.222 "name": "Existed_Raid", 00:14:41.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.222 "strip_size_kb": 64, 00:14:41.222 "state": "configuring", 00:14:41.222 "raid_level": "concat", 00:14:41.222 "superblock": false, 00:14:41.222 "num_base_bdevs": 4, 00:14:41.222 "num_base_bdevs_discovered": 2, 00:14:41.222 "num_base_bdevs_operational": 4, 00:14:41.222 "base_bdevs_list": [ 00:14:41.222 { 00:14:41.222 "name": "BaseBdev1", 00:14:41.222 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:41.222 "is_configured": true, 00:14:41.222 "data_offset": 0, 00:14:41.222 "data_size": 65536 00:14:41.222 }, 00:14:41.222 { 00:14:41.222 "name": null, 00:14:41.222 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:41.222 "is_configured": false, 00:14:41.222 "data_offset": 0, 00:14:41.222 "data_size": 65536 00:14:41.222 }, 00:14:41.222 { 00:14:41.222 "name": null, 00:14:41.222 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:41.222 "is_configured": false, 00:14:41.222 "data_offset": 0, 00:14:41.222 "data_size": 65536 00:14:41.222 }, 00:14:41.222 { 00:14:41.222 "name": "BaseBdev4", 00:14:41.222 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:41.222 "is_configured": true, 00:14:41.222 "data_offset": 0, 00:14:41.222 "data_size": 65536 00:14:41.222 } 00:14:41.222 ] 00:14:41.222 }' 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.222 19:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.790 [2024-12-05 19:34:35.080220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.790 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.790 "name": "Existed_Raid", 00:14:41.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.790 "strip_size_kb": 64, 00:14:41.790 "state": "configuring", 00:14:41.790 "raid_level": "concat", 00:14:41.790 "superblock": false, 00:14:41.790 "num_base_bdevs": 4, 00:14:41.790 "num_base_bdevs_discovered": 3, 00:14:41.790 "num_base_bdevs_operational": 4, 00:14:41.790 "base_bdevs_list": [ 00:14:41.790 { 00:14:41.790 "name": "BaseBdev1", 00:14:41.790 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:41.790 "is_configured": true, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 }, 00:14:41.790 { 00:14:41.790 "name": null, 00:14:41.790 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:41.790 "is_configured": false, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 }, 00:14:41.790 { 00:14:41.790 "name": "BaseBdev3", 00:14:41.790 "uuid": 
"89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:41.790 "is_configured": true, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 }, 00:14:41.790 { 00:14:41.790 "name": "BaseBdev4", 00:14:41.790 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:41.790 "is_configured": true, 00:14:41.790 "data_offset": 0, 00:14:41.790 "data_size": 65536 00:14:41.790 } 00:14:41.790 ] 00:14:41.790 }' 00:14:41.791 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.791 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 [2024-12-05 19:34:35.688503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.620 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.620 "name": "Existed_Raid", 00:14:42.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.620 "strip_size_kb": 64, 00:14:42.620 "state": "configuring", 00:14:42.620 "raid_level": "concat", 00:14:42.620 "superblock": false, 00:14:42.620 "num_base_bdevs": 4, 00:14:42.620 
"num_base_bdevs_discovered": 2, 00:14:42.620 "num_base_bdevs_operational": 4, 00:14:42.620 "base_bdevs_list": [ 00:14:42.620 { 00:14:42.620 "name": null, 00:14:42.620 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:42.620 "is_configured": false, 00:14:42.620 "data_offset": 0, 00:14:42.620 "data_size": 65536 00:14:42.620 }, 00:14:42.620 { 00:14:42.620 "name": null, 00:14:42.620 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:42.620 "is_configured": false, 00:14:42.620 "data_offset": 0, 00:14:42.620 "data_size": 65536 00:14:42.620 }, 00:14:42.620 { 00:14:42.620 "name": "BaseBdev3", 00:14:42.620 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:42.620 "is_configured": true, 00:14:42.620 "data_offset": 0, 00:14:42.620 "data_size": 65536 00:14:42.620 }, 00:14:42.620 { 00:14:42.620 "name": "BaseBdev4", 00:14:42.620 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:42.620 "is_configured": true, 00:14:42.620 "data_offset": 0, 00:14:42.620 "data_size": 65536 00:14:42.620 } 00:14:42.620 ] 00:14:42.620 }' 00:14:42.620 19:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.620 19:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.881 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.881 [2024-12-05 19:34:36.318076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.139 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.139 "name": "Existed_Raid", 00:14:43.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.139 "strip_size_kb": 64, 00:14:43.139 "state": "configuring", 00:14:43.140 "raid_level": "concat", 00:14:43.140 "superblock": false, 00:14:43.140 "num_base_bdevs": 4, 00:14:43.140 "num_base_bdevs_discovered": 3, 00:14:43.140 "num_base_bdevs_operational": 4, 00:14:43.140 "base_bdevs_list": [ 00:14:43.140 { 00:14:43.140 "name": null, 00:14:43.140 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:43.140 "is_configured": false, 00:14:43.140 "data_offset": 0, 00:14:43.140 "data_size": 65536 00:14:43.140 }, 00:14:43.140 { 00:14:43.140 "name": "BaseBdev2", 00:14:43.140 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:43.140 "is_configured": true, 00:14:43.140 "data_offset": 0, 00:14:43.140 "data_size": 65536 00:14:43.140 }, 00:14:43.140 { 00:14:43.140 "name": "BaseBdev3", 00:14:43.140 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:43.140 "is_configured": true, 00:14:43.140 "data_offset": 0, 00:14:43.140 "data_size": 65536 00:14:43.140 }, 00:14:43.140 { 00:14:43.140 "name": "BaseBdev4", 00:14:43.140 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:43.140 "is_configured": true, 00:14:43.140 "data_offset": 0, 00:14:43.140 "data_size": 65536 00:14:43.140 } 00:14:43.140 ] 00:14:43.140 }' 00:14:43.140 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.140 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.400 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:14:43.400 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.400 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.400 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be5fd404-bc6e-4b7b-8e43-8df1a986865e 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 [2024-12-05 19:34:36.980310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:43.660 NewBaseBdev 00:14:43.660 [2024-12-05 19:34:36.980654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:43.660 [2024-12-05 19:34:36.980678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:43.660 [2024-12-05 19:34:36.981077] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:43.660 [2024-12-05 19:34:36.981332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:43.660 [2024-12-05 19:34:36.981352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:43.660 [2024-12-05 19:34:36.981673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:43.660 19:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.660 19:34:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.660 [ 00:14:43.660 { 00:14:43.660 "name": "NewBaseBdev", 00:14:43.660 "aliases": [ 00:14:43.660 "be5fd404-bc6e-4b7b-8e43-8df1a986865e" 00:14:43.660 ], 00:14:43.660 "product_name": "Malloc disk", 00:14:43.660 "block_size": 512, 00:14:43.660 "num_blocks": 65536, 00:14:43.660 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:43.660 "assigned_rate_limits": { 00:14:43.660 "rw_ios_per_sec": 0, 00:14:43.660 "rw_mbytes_per_sec": 0, 00:14:43.660 "r_mbytes_per_sec": 0, 00:14:43.660 "w_mbytes_per_sec": 0 00:14:43.660 }, 00:14:43.660 "claimed": true, 00:14:43.660 "claim_type": "exclusive_write", 00:14:43.660 "zoned": false, 00:14:43.660 "supported_io_types": { 00:14:43.660 "read": true, 00:14:43.660 "write": true, 00:14:43.660 "unmap": true, 00:14:43.660 "flush": true, 00:14:43.660 "reset": true, 00:14:43.660 "nvme_admin": false, 00:14:43.660 "nvme_io": false, 00:14:43.660 "nvme_io_md": false, 00:14:43.660 "write_zeroes": true, 00:14:43.660 "zcopy": true, 00:14:43.660 "get_zone_info": false, 00:14:43.660 "zone_management": false, 00:14:43.660 "zone_append": false, 00:14:43.660 "compare": false, 00:14:43.660 "compare_and_write": false, 00:14:43.660 "abort": true, 00:14:43.660 "seek_hole": false, 00:14:43.660 "seek_data": false, 00:14:43.660 "copy": true, 00:14:43.660 "nvme_iov_md": false 00:14:43.660 }, 00:14:43.660 "memory_domains": [ 00:14:43.660 { 00:14:43.660 "dma_device_id": "system", 00:14:43.660 "dma_device_type": 1 00:14:43.660 }, 00:14:43.660 { 00:14:43.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.660 "dma_device_type": 2 00:14:43.660 } 00:14:43.660 ], 00:14:43.660 "driver_specific": {} 00:14:43.660 } 00:14:43.660 ] 00:14:43.660 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.660 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.660 19:34:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:43.660 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.660 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.660 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.660 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.661 "name": "Existed_Raid", 00:14:43.661 "uuid": "38e6fc79-eedb-4e55-bacb-a439212a40be", 00:14:43.661 "strip_size_kb": 64, 00:14:43.661 "state": "online", 00:14:43.661 "raid_level": 
"concat", 00:14:43.661 "superblock": false, 00:14:43.661 "num_base_bdevs": 4, 00:14:43.661 "num_base_bdevs_discovered": 4, 00:14:43.661 "num_base_bdevs_operational": 4, 00:14:43.661 "base_bdevs_list": [ 00:14:43.661 { 00:14:43.661 "name": "NewBaseBdev", 00:14:43.661 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:43.661 "is_configured": true, 00:14:43.661 "data_offset": 0, 00:14:43.661 "data_size": 65536 00:14:43.661 }, 00:14:43.661 { 00:14:43.661 "name": "BaseBdev2", 00:14:43.661 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:43.661 "is_configured": true, 00:14:43.661 "data_offset": 0, 00:14:43.661 "data_size": 65536 00:14:43.661 }, 00:14:43.661 { 00:14:43.661 "name": "BaseBdev3", 00:14:43.661 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:43.661 "is_configured": true, 00:14:43.661 "data_offset": 0, 00:14:43.661 "data_size": 65536 00:14:43.661 }, 00:14:43.661 { 00:14:43.661 "name": "BaseBdev4", 00:14:43.661 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:43.661 "is_configured": true, 00:14:43.661 "data_offset": 0, 00:14:43.661 "data_size": 65536 00:14:43.661 } 00:14:43.661 ] 00:14:43.661 }' 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.661 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.229 [2024-12-05 19:34:37.536964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.229 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.229 "name": "Existed_Raid", 00:14:44.229 "aliases": [ 00:14:44.229 "38e6fc79-eedb-4e55-bacb-a439212a40be" 00:14:44.229 ], 00:14:44.229 "product_name": "Raid Volume", 00:14:44.229 "block_size": 512, 00:14:44.229 "num_blocks": 262144, 00:14:44.229 "uuid": "38e6fc79-eedb-4e55-bacb-a439212a40be", 00:14:44.229 "assigned_rate_limits": { 00:14:44.229 "rw_ios_per_sec": 0, 00:14:44.229 "rw_mbytes_per_sec": 0, 00:14:44.229 "r_mbytes_per_sec": 0, 00:14:44.229 "w_mbytes_per_sec": 0 00:14:44.229 }, 00:14:44.229 "claimed": false, 00:14:44.229 "zoned": false, 00:14:44.229 "supported_io_types": { 00:14:44.229 "read": true, 00:14:44.229 "write": true, 00:14:44.229 "unmap": true, 00:14:44.229 "flush": true, 00:14:44.229 "reset": true, 00:14:44.229 "nvme_admin": false, 00:14:44.229 "nvme_io": false, 00:14:44.229 "nvme_io_md": false, 00:14:44.229 "write_zeroes": true, 00:14:44.229 "zcopy": false, 00:14:44.229 "get_zone_info": false, 00:14:44.229 "zone_management": false, 00:14:44.229 "zone_append": false, 00:14:44.229 "compare": false, 00:14:44.229 "compare_and_write": false, 00:14:44.229 "abort": false, 00:14:44.229 "seek_hole": false, 00:14:44.229 "seek_data": false, 00:14:44.229 "copy": false, 
00:14:44.229 "nvme_iov_md": false 00:14:44.229 }, 00:14:44.229 "memory_domains": [ 00:14:44.229 { 00:14:44.229 "dma_device_id": "system", 00:14:44.229 "dma_device_type": 1 00:14:44.229 }, 00:14:44.229 { 00:14:44.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.229 "dma_device_type": 2 00:14:44.229 }, 00:14:44.229 { 00:14:44.229 "dma_device_id": "system", 00:14:44.229 "dma_device_type": 1 00:14:44.229 }, 00:14:44.229 { 00:14:44.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.229 "dma_device_type": 2 00:14:44.229 }, 00:14:44.229 { 00:14:44.229 "dma_device_id": "system", 00:14:44.229 "dma_device_type": 1 00:14:44.229 }, 00:14:44.229 { 00:14:44.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.229 "dma_device_type": 2 00:14:44.229 }, 00:14:44.229 { 00:14:44.229 "dma_device_id": "system", 00:14:44.229 "dma_device_type": 1 00:14:44.229 }, 00:14:44.229 { 00:14:44.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.230 "dma_device_type": 2 00:14:44.230 } 00:14:44.230 ], 00:14:44.230 "driver_specific": { 00:14:44.230 "raid": { 00:14:44.230 "uuid": "38e6fc79-eedb-4e55-bacb-a439212a40be", 00:14:44.230 "strip_size_kb": 64, 00:14:44.230 "state": "online", 00:14:44.230 "raid_level": "concat", 00:14:44.230 "superblock": false, 00:14:44.230 "num_base_bdevs": 4, 00:14:44.230 "num_base_bdevs_discovered": 4, 00:14:44.230 "num_base_bdevs_operational": 4, 00:14:44.230 "base_bdevs_list": [ 00:14:44.230 { 00:14:44.230 "name": "NewBaseBdev", 00:14:44.230 "uuid": "be5fd404-bc6e-4b7b-8e43-8df1a986865e", 00:14:44.230 "is_configured": true, 00:14:44.230 "data_offset": 0, 00:14:44.230 "data_size": 65536 00:14:44.230 }, 00:14:44.230 { 00:14:44.230 "name": "BaseBdev2", 00:14:44.230 "uuid": "05622ecb-45f7-4530-aeeb-f84ec5adab94", 00:14:44.230 "is_configured": true, 00:14:44.230 "data_offset": 0, 00:14:44.230 "data_size": 65536 00:14:44.230 }, 00:14:44.230 { 00:14:44.230 "name": "BaseBdev3", 00:14:44.230 "uuid": "89f4d7c9-3a49-418f-8ba2-37c19b7a6b4a", 00:14:44.230 
"is_configured": true, 00:14:44.230 "data_offset": 0, 00:14:44.230 "data_size": 65536 00:14:44.230 }, 00:14:44.230 { 00:14:44.230 "name": "BaseBdev4", 00:14:44.230 "uuid": "4e1d6974-4c1c-46c2-bf0b-e6e291bc3096", 00:14:44.230 "is_configured": true, 00:14:44.230 "data_offset": 0, 00:14:44.230 "data_size": 65536 00:14:44.230 } 00:14:44.230 ] 00:14:44.230 } 00:14:44.230 } 00:14:44.230 }' 00:14:44.230 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.230 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:44.230 BaseBdev2 00:14:44.230 BaseBdev3 00:14:44.230 BaseBdev4' 00:14:44.230 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.489 19:34:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:44.489 19:34:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.489 [2024-12-05 19:34:37.916559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:44.489 [2024-12-05 19:34:37.916770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:44.489 [2024-12-05 19:34:37.916887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:44.489 [2024-12-05 19:34:37.916983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:44.489 [2024-12-05 19:34:37.917001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71386
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71386 ']'
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71386
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:14:44.489 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:44.748 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71386
00:14:44.748 killing process with pid 71386 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:44.748 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:44.748 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71386'
00:14:44.748 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71386
00:14:44.748 [2024-12-05 19:34:37.956294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:44.748 19:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71386
00:14:45.006 [2024-12-05 19:34:38.300043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:45.940 19:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:14:45.940
00:14:45.940 real	0m12.964s
00:14:45.940 user	0m21.578s
00:14:45.940 sys	0m1.767s
00:14:45.940 ************************************
00:14:45.940 END TEST raid_state_function_test
00:14:45.940 ************************************
00:14:45.940 19:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:45.940 19:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.200 19:34:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true
00:14:46.200 19:34:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:46.200 19:34:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:46.200 19:34:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:46.200 ************************************
00:14:46.200 START TEST raid_state_function_test_sb
00:14:46.200 ************************************
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:46.200 Process raid pid: 72063
00:14:46.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72063
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72063'
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72063
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72063 ']'
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:46.200 19:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:46.200 [2024-12-05 19:34:39.520173] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization...
00:14:46.200 [2024-12-05 19:34:39.520374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:46.459 [2024-12-05 19:34:39.705208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:46.459 [2024-12-05 19:34:39.865555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:46.718 [2024-12-05 19:34:40.094868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:46.718 [2024-12-05 19:34:40.094919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:47.285 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:47.285 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.286 [2024-12-05 19:34:40.529158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:47.286 [2024-12-05 19:34:40.529451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:47.286 [2024-12-05 19:34:40.529482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:47.286 [2024-12-05 19:34:40.529503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:47.286 [2024-12-05 19:34:40.529515] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:47.286 [2024-12-05 19:34:40.529531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:47.286 [2024-12-05 19:34:40.529541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:14:47.286 [2024-12-05 19:34:40.529557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:47.286   "name": "Existed_Raid",
00:14:47.286   "uuid": "af0ecb03-4dc7-4c1e-b0b5-dfa11d3e579a",
00:14:47.286   "strip_size_kb": 64,
00:14:47.286   "state": "configuring",
00:14:47.286   "raid_level": "concat",
00:14:47.286   "superblock": true,
00:14:47.286   "num_base_bdevs": 4,
00:14:47.286   "num_base_bdevs_discovered": 0,
00:14:47.286   "num_base_bdevs_operational": 4,
00:14:47.286   "base_bdevs_list": [
00:14:47.286     {
00:14:47.286       "name": "BaseBdev1",
00:14:47.286       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.286       "is_configured": false,
00:14:47.286       "data_offset": 0,
00:14:47.286       "data_size": 0
00:14:47.286     },
00:14:47.286     {
00:14:47.286       "name": "BaseBdev2",
00:14:47.286       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.286       "is_configured": false,
00:14:47.286       "data_offset": 0,
00:14:47.286       "data_size": 0
00:14:47.286     },
00:14:47.286     {
00:14:47.286       "name": "BaseBdev3",
00:14:47.286       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.286       "is_configured": false,
00:14:47.286       "data_offset": 0,
00:14:47.286       "data_size": 0
00:14:47.286     },
00:14:47.286     {
00:14:47.286       "name": "BaseBdev4",
00:14:47.286       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.286       "is_configured": false,
00:14:47.286       "data_offset": 0,
00:14:47.286       "data_size": 0
00:14:47.286     }
00:14:47.286   ]
00:14:47.286 }'
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:47.286 19:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 [2024-12-05 19:34:41.041302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:47.854 [2024-12-05 19:34:41.041351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 [2024-12-05 19:34:41.049282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:47.854 [2024-12-05 19:34:41.049512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:47.854 [2024-12-05 19:34:41.049651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:47.854 [2024-12-05 19:34:41.049687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:47.854 [2024-12-05 19:34:41.049715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:47.854 [2024-12-05 19:34:41.049735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:47.854 [2024-12-05 19:34:41.049757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:14:47.854 [2024-12-05 19:34:41.049773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 [2024-12-05 19:34:41.095378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:47.854 BaseBdev1
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 [
00:14:47.854   {
00:14:47.854     "name": "BaseBdev1",
00:14:47.854     "aliases": [
00:14:47.854       "624bb2ad-5e20-4173-a86c-6967ed068f45"
00:14:47.854     ],
00:14:47.854     "product_name": "Malloc disk",
00:14:47.854     "block_size": 512,
00:14:47.854     "num_blocks": 65536,
00:14:47.854     "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45",
00:14:47.854     "assigned_rate_limits": {
00:14:47.854       "rw_ios_per_sec": 0,
00:14:47.854       "rw_mbytes_per_sec": 0,
00:14:47.854       "r_mbytes_per_sec": 0,
00:14:47.854       "w_mbytes_per_sec": 0
00:14:47.854     },
00:14:47.854     "claimed": true,
00:14:47.854     "claim_type": "exclusive_write",
00:14:47.854     "zoned": false,
00:14:47.854     "supported_io_types": {
00:14:47.854       "read": true,
00:14:47.854       "write": true,
00:14:47.854       "unmap": true,
00:14:47.854       "flush": true,
00:14:47.854       "reset": true,
00:14:47.854       "nvme_admin": false,
00:14:47.854       "nvme_io": false,
00:14:47.854       "nvme_io_md": false,
00:14:47.854       "write_zeroes": true,
00:14:47.854       "zcopy": true,
00:14:47.854       "get_zone_info": false,
00:14:47.854       "zone_management": false,
00:14:47.854       "zone_append": false,
00:14:47.854       "compare": false,
00:14:47.854       "compare_and_write": false,
00:14:47.854       "abort": true,
00:14:47.854       "seek_hole": false,
00:14:47.854       "seek_data": false,
00:14:47.854       "copy": true,
00:14:47.854       "nvme_iov_md": false
00:14:47.854     },
00:14:47.854     "memory_domains": [
00:14:47.854       {
00:14:47.854         "dma_device_id": "system",
00:14:47.854         "dma_device_type": 1
00:14:47.854       },
00:14:47.854       {
00:14:47.854         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:47.854         "dma_device_type": 2
00:14:47.854       }
00:14:47.854     ],
00:14:47.854     "driver_specific": {}
00:14:47.854   }
00:14:47.854 ]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.854 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:47.854   "name": "Existed_Raid",
00:14:47.854   "uuid": "c807af9d-f9d5-43ac-b29f-7d0eb621be27",
00:14:47.854   "strip_size_kb": 64,
00:14:47.854   "state": "configuring",
00:14:47.854   "raid_level": "concat",
00:14:47.854   "superblock": true,
00:14:47.854   "num_base_bdevs": 4,
00:14:47.854   "num_base_bdevs_discovered": 1,
00:14:47.854   "num_base_bdevs_operational": 4,
00:14:47.854   "base_bdevs_list": [
00:14:47.854     {
00:14:47.854       "name": "BaseBdev1",
00:14:47.854       "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45",
00:14:47.854       "is_configured": true,
00:14:47.854       "data_offset": 2048,
00:14:47.854       "data_size": 63488
00:14:47.854     },
00:14:47.854     {
00:14:47.854       "name": "BaseBdev2",
00:14:47.854       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.854       "is_configured": false,
00:14:47.854       "data_offset": 0,
00:14:47.854       "data_size": 0
00:14:47.854     },
00:14:47.854     {
00:14:47.855       "name": "BaseBdev3",
00:14:47.855       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.855       "is_configured": false,
00:14:47.855       "data_offset": 0,
00:14:47.855       "data_size": 0
00:14:47.855     },
00:14:47.855     {
00:14:47.855       "name": "BaseBdev4",
00:14:47.855       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.855       "is_configured": false,
00:14:47.855       "data_offset": 0,
00:14:47.855       "data_size": 0
00:14:47.855     }
00:14:47.855   ]
00:14:47.855 }'
00:14:47.855 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:47.855 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.419 [2024-12-05 19:34:41.651582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:48.419 [2024-12-05 19:34:41.651644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.419 [2024-12-05 19:34:41.659639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:48.419 [2024-12-05 19:34:41.662123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:48.419 [2024-12-05 19:34:41.662183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:48.419 [2024-12-05 19:34:41.662202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:48.419 [2024-12-05 19:34:41.662222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:48.419 [2024-12-05 19:34:41.662233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:14:48.419 [2024-12-05 19:34:41.662249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:48.419 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.420 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.420 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.420   "name": "Existed_Raid",
00:14:48.420   "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5",
00:14:48.420   "strip_size_kb": 64,
00:14:48.420   "state": "configuring",
00:14:48.420   "raid_level": "concat",
00:14:48.420   "superblock": true,
00:14:48.420   "num_base_bdevs": 4,
00:14:48.420   "num_base_bdevs_discovered": 1,
00:14:48.420   "num_base_bdevs_operational": 4,
00:14:48.420   "base_bdevs_list": [
00:14:48.420     {
00:14:48.420       "name": "BaseBdev1",
00:14:48.420       "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45",
00:14:48.420       "is_configured": true,
00:14:48.420       "data_offset": 2048,
00:14:48.420       "data_size": 63488
00:14:48.420     },
00:14:48.420     {
00:14:48.420       "name": "BaseBdev2",
00:14:48.420       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.420       "is_configured": false,
00:14:48.420       "data_offset": 0,
00:14:48.420       "data_size": 0
00:14:48.420     },
00:14:48.420     {
00:14:48.420       "name": "BaseBdev3",
00:14:48.420       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.420       "is_configured": false,
00:14:48.420       "data_offset": 0,
00:14:48.420       "data_size": 0
00:14:48.420     },
00:14:48.420     {
00:14:48.420       "name": "BaseBdev4",
00:14:48.420       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.420       "is_configured": false,
00:14:48.420       "data_offset": 0,
00:14:48.420       "data_size": 0
00:14:48.420     }
00:14:48.420   ]
00:14:48.420 }'
00:14:48.420 19:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:48.420 19:34:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.987 [2024-12-05 19:34:42.223629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:48.987 BaseBdev2
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.987 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.987 [
00:14:48.987   {
00:14:48.987     "name": "BaseBdev2",
00:14:48.987     "aliases": [
00:14:48.987       "21761e1b-b4ed-46cb-8f26-3d633c581a8b"
00:14:48.987     ],
00:14:48.987     "product_name": "Malloc disk",
00:14:48.987     "block_size": 512,
00:14:48.987     "num_blocks": 65536,
00:14:48.987     "uuid": "21761e1b-b4ed-46cb-8f26-3d633c581a8b",
00:14:48.987     "assigned_rate_limits": {
00:14:48.987       "rw_ios_per_sec": 0,
00:14:48.987       "rw_mbytes_per_sec": 0,
00:14:48.987       "r_mbytes_per_sec": 0,
00:14:48.987       "w_mbytes_per_sec": 0
00:14:48.987     },
00:14:48.987     "claimed": true,
00:14:48.987     "claim_type": "exclusive_write",
00:14:48.987     "zoned": false,
00:14:48.987     "supported_io_types": {
00:14:48.987       "read": true,
00:14:48.987       "write": true,
00:14:48.987       "unmap": true,
00:14:48.987       "flush": true,
00:14:48.987       "reset": true,
00:14:48.987       "nvme_admin": false,
00:14:48.987       "nvme_io": false,
00:14:48.987       "nvme_io_md": false,
00:14:48.987       "write_zeroes": true,
00:14:48.987       "zcopy": true,
00:14:48.987       "get_zone_info": false,
00:14:48.987       "zone_management": false,
00:14:48.987       "zone_append": false,
00:14:48.987       "compare": false,
00:14:48.987       "compare_and_write": false,
00:14:48.988       "abort": true,
00:14:48.988       "seek_hole": false,
00:14:48.988       "seek_data": false,
00:14:48.988       "copy": true,
00:14:48.988       "nvme_iov_md": false
00:14:48.988     },
00:14:48.988     "memory_domains": [
00:14:48.988       {
00:14:48.988         "dma_device_id": "system",
00:14:48.988         "dma_device_type": 1
00:14:48.988       },
00:14:48.988       {
00:14:48.988         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:48.988         "dma_device_type": 2
00:14:48.988       }
00:14:48.988     ],
00:14:48.988     "driver_specific": {}
00:14:48.988   }
00:14:48.988 ]
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.988   "name": "Existed_Raid",
00:14:48.988   "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5",
00:14:48.988   "strip_size_kb": 64,
00:14:48.988   "state": "configuring",
00:14:48.988   "raid_level": "concat",
00:14:48.988   "superblock": true,
00:14:48.988   "num_base_bdevs": 4,
00:14:48.988   "num_base_bdevs_discovered": 2,
00:14:48.988   "num_base_bdevs_operational": 4,
00:14:48.988   "base_bdevs_list": [
00:14:48.988     {
00:14:48.988       "name": "BaseBdev1",
00:14:48.988       "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45",
00:14:48.988       "is_configured": true,
00:14:48.988       "data_offset": 2048,
00:14:48.988       "data_size": 63488
00:14:48.988     },
00:14:48.988     {
00:14:48.988       "name": "BaseBdev2",
00:14:48.988       "uuid": "21761e1b-b4ed-46cb-8f26-3d633c581a8b",
00:14:48.988       "is_configured": true,
00:14:48.988       "data_offset": 2048,
00:14:48.988       "data_size": 63488
00:14:48.988     },
00:14:48.988     {
00:14:48.988       "name": "BaseBdev3",
00:14:48.988       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.988       "is_configured": false,
00:14:48.988       "data_offset": 0,
00:14:48.988       "data_size": 0
00:14:48.988     },
00:14:48.988     {
00:14:48.988       "name": "BaseBdev4",
00:14:48.988       "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.988       "is_configured": false,
00:14:48.988       "data_offset": 0,
00:14:48.988       "data_size": 0
00:14:48.988     }
00:14:48.988   ]
00:14:48.988 }'
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:48.988 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.564 [2024-12-05 19:34:42.818170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:49.564 BaseBdev3
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:49.564 [
00:14:49.564   {
00:14:49.564     "name": "BaseBdev3",
00:14:49.564     "aliases": [
00:14:49.564       "145f0ab0-f73c-42cb-8a13-bb2778c6fc37"
00:14:49.564     ],
00:14:49.564     "product_name": "Malloc disk",
00:14:49.564     "block_size": 512,
00:14:49.564     "num_blocks": 65536,
00:14:49.564     "uuid": "145f0ab0-f73c-42cb-8a13-bb2778c6fc37",
00:14:49.564     "assigned_rate_limits": {
00:14:49.564       "rw_ios_per_sec": 0,
00:14:49.564       "rw_mbytes_per_sec": 0,
00:14:49.564       "r_mbytes_per_sec": 0,
00:14:49.564       "w_mbytes_per_sec": 0
00:14:49.564     },
00:14:49.564     "claimed": true,
00:14:49.564     "claim_type": "exclusive_write",
00:14:49.564     "zoned": false,
00:14:49.564     "supported_io_types": {
00:14:49.564 "read": true, 00:14:49.564 "write": true, 00:14:49.564 "unmap": true, 00:14:49.564 "flush": true, 00:14:49.564 "reset": true, 00:14:49.564 "nvme_admin": false, 00:14:49.564 "nvme_io": false, 00:14:49.564 "nvme_io_md": false, 00:14:49.564 "write_zeroes": true, 00:14:49.564 "zcopy": true, 00:14:49.564 "get_zone_info": false, 00:14:49.564 "zone_management": false, 00:14:49.564 "zone_append": false, 00:14:49.564 "compare": false, 00:14:49.564 "compare_and_write": false, 00:14:49.564 "abort": true, 00:14:49.564 "seek_hole": false, 00:14:49.564 "seek_data": false, 00:14:49.564 "copy": true, 00:14:49.564 "nvme_iov_md": false 00:14:49.564 }, 00:14:49.564 "memory_domains": [ 00:14:49.564 { 00:14:49.564 "dma_device_id": "system", 00:14:49.564 "dma_device_type": 1 00:14:49.564 }, 00:14:49.564 { 00:14:49.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.564 "dma_device_type": 2 00:14:49.564 } 00:14:49.564 ], 00:14:49.564 "driver_specific": {} 00:14:49.564 } 00:14:49.564 ] 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.564 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.564 "name": "Existed_Raid", 00:14:49.564 "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5", 00:14:49.564 "strip_size_kb": 64, 00:14:49.564 "state": "configuring", 00:14:49.564 "raid_level": "concat", 00:14:49.564 "superblock": true, 00:14:49.564 "num_base_bdevs": 4, 00:14:49.565 "num_base_bdevs_discovered": 3, 00:14:49.565 "num_base_bdevs_operational": 4, 00:14:49.565 "base_bdevs_list": [ 00:14:49.565 { 00:14:49.565 "name": "BaseBdev1", 00:14:49.565 "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45", 00:14:49.565 "is_configured": true, 00:14:49.565 "data_offset": 2048, 00:14:49.565 "data_size": 63488 00:14:49.565 }, 00:14:49.565 { 00:14:49.565 "name": "BaseBdev2", 00:14:49.565 
"uuid": "21761e1b-b4ed-46cb-8f26-3d633c581a8b", 00:14:49.565 "is_configured": true, 00:14:49.565 "data_offset": 2048, 00:14:49.565 "data_size": 63488 00:14:49.565 }, 00:14:49.565 { 00:14:49.565 "name": "BaseBdev3", 00:14:49.565 "uuid": "145f0ab0-f73c-42cb-8a13-bb2778c6fc37", 00:14:49.565 "is_configured": true, 00:14:49.565 "data_offset": 2048, 00:14:49.565 "data_size": 63488 00:14:49.565 }, 00:14:49.565 { 00:14:49.565 "name": "BaseBdev4", 00:14:49.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.565 "is_configured": false, 00:14:49.565 "data_offset": 0, 00:14:49.565 "data_size": 0 00:14:49.565 } 00:14:49.565 ] 00:14:49.565 }' 00:14:49.565 19:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.565 19:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.132 [2024-12-05 19:34:43.458119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.132 [2024-12-05 19:34:43.458432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:50.132 [2024-12-05 19:34:43.458467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:50.132 [2024-12-05 19:34:43.458843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:50.132 [2024-12-05 19:34:43.459039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:50.132 [2024-12-05 19:34:43.459061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:14:50.132 [2024-12-05 19:34:43.459235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.132 BaseBdev4 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.132 [ 00:14:50.132 { 00:14:50.132 "name": "BaseBdev4", 00:14:50.132 "aliases": [ 00:14:50.132 "cf6e70b9-42ab-449a-b945-a12310e5686f" 00:14:50.132 ], 00:14:50.132 "product_name": "Malloc disk", 00:14:50.132 "block_size": 512, 
00:14:50.132 "num_blocks": 65536, 00:14:50.132 "uuid": "cf6e70b9-42ab-449a-b945-a12310e5686f", 00:14:50.132 "assigned_rate_limits": { 00:14:50.132 "rw_ios_per_sec": 0, 00:14:50.132 "rw_mbytes_per_sec": 0, 00:14:50.132 "r_mbytes_per_sec": 0, 00:14:50.132 "w_mbytes_per_sec": 0 00:14:50.132 }, 00:14:50.132 "claimed": true, 00:14:50.132 "claim_type": "exclusive_write", 00:14:50.132 "zoned": false, 00:14:50.132 "supported_io_types": { 00:14:50.132 "read": true, 00:14:50.132 "write": true, 00:14:50.132 "unmap": true, 00:14:50.132 "flush": true, 00:14:50.132 "reset": true, 00:14:50.132 "nvme_admin": false, 00:14:50.132 "nvme_io": false, 00:14:50.132 "nvme_io_md": false, 00:14:50.132 "write_zeroes": true, 00:14:50.132 "zcopy": true, 00:14:50.132 "get_zone_info": false, 00:14:50.132 "zone_management": false, 00:14:50.132 "zone_append": false, 00:14:50.132 "compare": false, 00:14:50.132 "compare_and_write": false, 00:14:50.132 "abort": true, 00:14:50.132 "seek_hole": false, 00:14:50.132 "seek_data": false, 00:14:50.132 "copy": true, 00:14:50.132 "nvme_iov_md": false 00:14:50.132 }, 00:14:50.132 "memory_domains": [ 00:14:50.132 { 00:14:50.132 "dma_device_id": "system", 00:14:50.132 "dma_device_type": 1 00:14:50.132 }, 00:14:50.132 { 00:14:50.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.132 "dma_device_type": 2 00:14:50.132 } 00:14:50.132 ], 00:14:50.132 "driver_specific": {} 00:14:50.132 } 00:14:50.132 ] 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.132 "name": "Existed_Raid", 00:14:50.132 "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5", 00:14:50.132 "strip_size_kb": 64, 00:14:50.132 "state": "online", 00:14:50.132 "raid_level": "concat", 00:14:50.132 "superblock": true, 00:14:50.132 "num_base_bdevs": 
4, 00:14:50.132 "num_base_bdevs_discovered": 4, 00:14:50.132 "num_base_bdevs_operational": 4, 00:14:50.132 "base_bdevs_list": [ 00:14:50.132 { 00:14:50.132 "name": "BaseBdev1", 00:14:50.132 "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45", 00:14:50.132 "is_configured": true, 00:14:50.132 "data_offset": 2048, 00:14:50.132 "data_size": 63488 00:14:50.132 }, 00:14:50.132 { 00:14:50.132 "name": "BaseBdev2", 00:14:50.132 "uuid": "21761e1b-b4ed-46cb-8f26-3d633c581a8b", 00:14:50.132 "is_configured": true, 00:14:50.132 "data_offset": 2048, 00:14:50.132 "data_size": 63488 00:14:50.132 }, 00:14:50.132 { 00:14:50.132 "name": "BaseBdev3", 00:14:50.132 "uuid": "145f0ab0-f73c-42cb-8a13-bb2778c6fc37", 00:14:50.132 "is_configured": true, 00:14:50.132 "data_offset": 2048, 00:14:50.132 "data_size": 63488 00:14:50.132 }, 00:14:50.132 { 00:14:50.132 "name": "BaseBdev4", 00:14:50.132 "uuid": "cf6e70b9-42ab-449a-b945-a12310e5686f", 00:14:50.132 "is_configured": true, 00:14:50.132 "data_offset": 2048, 00:14:50.132 "data_size": 63488 00:14:50.132 } 00:14:50.132 ] 00:14:50.132 }' 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.132 19:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.699 
19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.699 [2024-12-05 19:34:44.014872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.699 "name": "Existed_Raid", 00:14:50.699 "aliases": [ 00:14:50.699 "027e1f7e-291d-465f-92ba-825959ee0ca5" 00:14:50.699 ], 00:14:50.699 "product_name": "Raid Volume", 00:14:50.699 "block_size": 512, 00:14:50.699 "num_blocks": 253952, 00:14:50.699 "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5", 00:14:50.699 "assigned_rate_limits": { 00:14:50.699 "rw_ios_per_sec": 0, 00:14:50.699 "rw_mbytes_per_sec": 0, 00:14:50.699 "r_mbytes_per_sec": 0, 00:14:50.699 "w_mbytes_per_sec": 0 00:14:50.699 }, 00:14:50.699 "claimed": false, 00:14:50.699 "zoned": false, 00:14:50.699 "supported_io_types": { 00:14:50.699 "read": true, 00:14:50.699 "write": true, 00:14:50.699 "unmap": true, 00:14:50.699 "flush": true, 00:14:50.699 "reset": true, 00:14:50.699 "nvme_admin": false, 00:14:50.699 "nvme_io": false, 00:14:50.699 "nvme_io_md": false, 00:14:50.699 "write_zeroes": true, 00:14:50.699 "zcopy": false, 00:14:50.699 "get_zone_info": false, 00:14:50.699 "zone_management": false, 00:14:50.699 "zone_append": false, 00:14:50.699 "compare": false, 00:14:50.699 "compare_and_write": false, 00:14:50.699 "abort": false, 00:14:50.699 "seek_hole": false, 00:14:50.699 "seek_data": false, 00:14:50.699 "copy": false, 00:14:50.699 
"nvme_iov_md": false 00:14:50.699 }, 00:14:50.699 "memory_domains": [ 00:14:50.699 { 00:14:50.699 "dma_device_id": "system", 00:14:50.699 "dma_device_type": 1 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.699 "dma_device_type": 2 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "system", 00:14:50.699 "dma_device_type": 1 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.699 "dma_device_type": 2 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "system", 00:14:50.699 "dma_device_type": 1 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.699 "dma_device_type": 2 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "system", 00:14:50.699 "dma_device_type": 1 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.699 "dma_device_type": 2 00:14:50.699 } 00:14:50.699 ], 00:14:50.699 "driver_specific": { 00:14:50.699 "raid": { 00:14:50.699 "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5", 00:14:50.699 "strip_size_kb": 64, 00:14:50.699 "state": "online", 00:14:50.699 "raid_level": "concat", 00:14:50.699 "superblock": true, 00:14:50.699 "num_base_bdevs": 4, 00:14:50.699 "num_base_bdevs_discovered": 4, 00:14:50.699 "num_base_bdevs_operational": 4, 00:14:50.699 "base_bdevs_list": [ 00:14:50.699 { 00:14:50.699 "name": "BaseBdev1", 00:14:50.699 "uuid": "624bb2ad-5e20-4173-a86c-6967ed068f45", 00:14:50.699 "is_configured": true, 00:14:50.699 "data_offset": 2048, 00:14:50.699 "data_size": 63488 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "name": "BaseBdev2", 00:14:50.699 "uuid": "21761e1b-b4ed-46cb-8f26-3d633c581a8b", 00:14:50.699 "is_configured": true, 00:14:50.699 "data_offset": 2048, 00:14:50.699 "data_size": 63488 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "name": "BaseBdev3", 00:14:50.699 "uuid": "145f0ab0-f73c-42cb-8a13-bb2778c6fc37", 00:14:50.699 "is_configured": true, 
00:14:50.699 "data_offset": 2048, 00:14:50.699 "data_size": 63488 00:14:50.699 }, 00:14:50.699 { 00:14:50.699 "name": "BaseBdev4", 00:14:50.699 "uuid": "cf6e70b9-42ab-449a-b945-a12310e5686f", 00:14:50.699 "is_configured": true, 00:14:50.699 "data_offset": 2048, 00:14:50.699 "data_size": 63488 00:14:50.699 } 00:14:50.699 ] 00:14:50.699 } 00:14:50.699 } 00:14:50.699 }' 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:50.699 BaseBdev2 00:14:50.699 BaseBdev3 00:14:50.699 BaseBdev4' 00:14:50.699 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.958 19:34:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:50.958 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.959 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.959 [2024-12-05 19:34:44.378581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.959 [2024-12-05 19:34:44.378641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.959 [2024-12-05 19:34:44.378707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:51.217 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.217 "name": "Existed_Raid", 00:14:51.217 "uuid": "027e1f7e-291d-465f-92ba-825959ee0ca5", 00:14:51.217 "strip_size_kb": 64, 00:14:51.217 "state": "offline", 00:14:51.217 "raid_level": "concat", 00:14:51.217 "superblock": true, 00:14:51.217 "num_base_bdevs": 4, 00:14:51.217 "num_base_bdevs_discovered": 3, 00:14:51.217 "num_base_bdevs_operational": 3, 00:14:51.217 "base_bdevs_list": [ 00:14:51.217 { 00:14:51.217 "name": null, 00:14:51.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.217 "is_configured": false, 00:14:51.217 "data_offset": 0, 00:14:51.217 "data_size": 63488 00:14:51.217 }, 00:14:51.217 { 00:14:51.217 "name": "BaseBdev2", 00:14:51.217 "uuid": "21761e1b-b4ed-46cb-8f26-3d633c581a8b", 00:14:51.217 "is_configured": true, 00:14:51.217 "data_offset": 2048, 00:14:51.217 "data_size": 63488 00:14:51.217 }, 00:14:51.217 { 00:14:51.217 "name": "BaseBdev3", 00:14:51.217 "uuid": "145f0ab0-f73c-42cb-8a13-bb2778c6fc37", 00:14:51.217 "is_configured": true, 00:14:51.217 "data_offset": 2048, 00:14:51.217 "data_size": 63488 00:14:51.217 }, 00:14:51.217 { 00:14:51.217 "name": "BaseBdev4", 00:14:51.217 "uuid": "cf6e70b9-42ab-449a-b945-a12310e5686f", 00:14:51.217 "is_configured": true, 00:14:51.218 "data_offset": 2048, 00:14:51.218 "data_size": 63488 00:14:51.218 } 00:14:51.218 ] 00:14:51.218 }' 00:14:51.218 19:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.218 19:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.786 
19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.786 [2024-12-05 19:34:45.056198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.786 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.786 [2024-12-05 19:34:45.202897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:52.045 19:34:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.045 [2024-12-05 19:34:45.353049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:52.045 [2024-12-05 19:34:45.353150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.045 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 BaseBdev2 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 [ 00:14:52.305 { 00:14:52.305 "name": "BaseBdev2", 00:14:52.305 "aliases": [ 00:14:52.305 
"d097a4a1-7813-4d15-a5d5-3a0e63c62f8f" 00:14:52.305 ], 00:14:52.305 "product_name": "Malloc disk", 00:14:52.305 "block_size": 512, 00:14:52.305 "num_blocks": 65536, 00:14:52.305 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:52.305 "assigned_rate_limits": { 00:14:52.305 "rw_ios_per_sec": 0, 00:14:52.305 "rw_mbytes_per_sec": 0, 00:14:52.305 "r_mbytes_per_sec": 0, 00:14:52.305 "w_mbytes_per_sec": 0 00:14:52.305 }, 00:14:52.305 "claimed": false, 00:14:52.305 "zoned": false, 00:14:52.305 "supported_io_types": { 00:14:52.305 "read": true, 00:14:52.305 "write": true, 00:14:52.305 "unmap": true, 00:14:52.305 "flush": true, 00:14:52.305 "reset": true, 00:14:52.305 "nvme_admin": false, 00:14:52.305 "nvme_io": false, 00:14:52.305 "nvme_io_md": false, 00:14:52.305 "write_zeroes": true, 00:14:52.305 "zcopy": true, 00:14:52.305 "get_zone_info": false, 00:14:52.305 "zone_management": false, 00:14:52.305 "zone_append": false, 00:14:52.305 "compare": false, 00:14:52.305 "compare_and_write": false, 00:14:52.305 "abort": true, 00:14:52.305 "seek_hole": false, 00:14:52.305 "seek_data": false, 00:14:52.305 "copy": true, 00:14:52.305 "nvme_iov_md": false 00:14:52.305 }, 00:14:52.305 "memory_domains": [ 00:14:52.305 { 00:14:52.305 "dma_device_id": "system", 00:14:52.305 "dma_device_type": 1 00:14:52.305 }, 00:14:52.305 { 00:14:52.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.305 "dma_device_type": 2 00:14:52.305 } 00:14:52.305 ], 00:14:52.305 "driver_specific": {} 00:14:52.305 } 00:14:52.305 ] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.305 19:34:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 BaseBdev3 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 [ 00:14:52.305 { 
00:14:52.305 "name": "BaseBdev3", 00:14:52.305 "aliases": [ 00:14:52.305 "19de3d95-0b33-4c94-8987-d9a6b5db386a" 00:14:52.305 ], 00:14:52.305 "product_name": "Malloc disk", 00:14:52.305 "block_size": 512, 00:14:52.305 "num_blocks": 65536, 00:14:52.305 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:52.305 "assigned_rate_limits": { 00:14:52.305 "rw_ios_per_sec": 0, 00:14:52.305 "rw_mbytes_per_sec": 0, 00:14:52.305 "r_mbytes_per_sec": 0, 00:14:52.305 "w_mbytes_per_sec": 0 00:14:52.305 }, 00:14:52.305 "claimed": false, 00:14:52.305 "zoned": false, 00:14:52.305 "supported_io_types": { 00:14:52.305 "read": true, 00:14:52.305 "write": true, 00:14:52.305 "unmap": true, 00:14:52.305 "flush": true, 00:14:52.305 "reset": true, 00:14:52.305 "nvme_admin": false, 00:14:52.305 "nvme_io": false, 00:14:52.305 "nvme_io_md": false, 00:14:52.305 "write_zeroes": true, 00:14:52.305 "zcopy": true, 00:14:52.305 "get_zone_info": false, 00:14:52.305 "zone_management": false, 00:14:52.305 "zone_append": false, 00:14:52.305 "compare": false, 00:14:52.305 "compare_and_write": false, 00:14:52.305 "abort": true, 00:14:52.305 "seek_hole": false, 00:14:52.305 "seek_data": false, 00:14:52.305 "copy": true, 00:14:52.305 "nvme_iov_md": false 00:14:52.305 }, 00:14:52.305 "memory_domains": [ 00:14:52.305 { 00:14:52.305 "dma_device_id": "system", 00:14:52.305 "dma_device_type": 1 00:14:52.305 }, 00:14:52.305 { 00:14:52.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.305 "dma_device_type": 2 00:14:52.305 } 00:14:52.305 ], 00:14:52.305 "driver_specific": {} 00:14:52.305 } 00:14:52.305 ] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 BaseBdev4 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:52.305 [ 00:14:52.305 { 00:14:52.305 "name": "BaseBdev4", 00:14:52.305 "aliases": [ 00:14:52.305 "2fb95a06-9861-432f-8f3b-d51ccd98fbe6" 00:14:52.305 ], 00:14:52.305 "product_name": "Malloc disk", 00:14:52.305 "block_size": 512, 00:14:52.305 "num_blocks": 65536, 00:14:52.305 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:52.305 "assigned_rate_limits": { 00:14:52.305 "rw_ios_per_sec": 0, 00:14:52.305 "rw_mbytes_per_sec": 0, 00:14:52.305 "r_mbytes_per_sec": 0, 00:14:52.305 "w_mbytes_per_sec": 0 00:14:52.305 }, 00:14:52.305 "claimed": false, 00:14:52.305 "zoned": false, 00:14:52.305 "supported_io_types": { 00:14:52.305 "read": true, 00:14:52.305 "write": true, 00:14:52.305 "unmap": true, 00:14:52.305 "flush": true, 00:14:52.305 "reset": true, 00:14:52.305 "nvme_admin": false, 00:14:52.305 "nvme_io": false, 00:14:52.305 "nvme_io_md": false, 00:14:52.305 "write_zeroes": true, 00:14:52.305 "zcopy": true, 00:14:52.305 "get_zone_info": false, 00:14:52.305 "zone_management": false, 00:14:52.305 "zone_append": false, 00:14:52.305 "compare": false, 00:14:52.305 "compare_and_write": false, 00:14:52.305 "abort": true, 00:14:52.305 "seek_hole": false, 00:14:52.305 "seek_data": false, 00:14:52.305 "copy": true, 00:14:52.305 "nvme_iov_md": false 00:14:52.305 }, 00:14:52.305 "memory_domains": [ 00:14:52.305 { 00:14:52.305 "dma_device_id": "system", 00:14:52.305 "dma_device_type": 1 00:14:52.305 }, 00:14:52.305 { 00:14:52.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.305 "dma_device_type": 2 00:14:52.305 } 00:14:52.305 ], 00:14:52.305 "driver_specific": {} 00:14:52.305 } 00:14:52.305 ] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.305 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.305 19:34:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.306 [2024-12-05 19:34:45.722728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.306 [2024-12-05 19:34:45.722790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.306 [2024-12-05 19:34:45.722824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.306 [2024-12-05 19:34:45.725310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.306 [2024-12-05 19:34:45.725405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.306 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.564 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.564 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.564 "name": "Existed_Raid", 00:14:52.564 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:52.564 "strip_size_kb": 64, 00:14:52.564 "state": "configuring", 00:14:52.564 "raid_level": "concat", 00:14:52.564 "superblock": true, 00:14:52.564 "num_base_bdevs": 4, 00:14:52.564 "num_base_bdevs_discovered": 3, 00:14:52.564 "num_base_bdevs_operational": 4, 00:14:52.564 "base_bdevs_list": [ 00:14:52.564 { 00:14:52.564 "name": "BaseBdev1", 00:14:52.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.564 "is_configured": false, 00:14:52.564 "data_offset": 0, 00:14:52.564 "data_size": 0 00:14:52.564 }, 00:14:52.564 { 00:14:52.564 "name": "BaseBdev2", 00:14:52.564 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:52.564 "is_configured": true, 00:14:52.564 "data_offset": 2048, 00:14:52.564 "data_size": 63488 
00:14:52.564 }, 00:14:52.564 { 00:14:52.564 "name": "BaseBdev3", 00:14:52.564 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:52.564 "is_configured": true, 00:14:52.564 "data_offset": 2048, 00:14:52.564 "data_size": 63488 00:14:52.564 }, 00:14:52.564 { 00:14:52.564 "name": "BaseBdev4", 00:14:52.564 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:52.564 "is_configured": true, 00:14:52.564 "data_offset": 2048, 00:14:52.564 "data_size": 63488 00:14:52.564 } 00:14:52.564 ] 00:14:52.564 }' 00:14:52.564 19:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.564 19:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.823 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:52.823 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.823 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.823 [2024-12-05 19:34:46.258987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.081 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.081 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.081 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.081 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.081 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.081 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.082 "name": "Existed_Raid", 00:14:53.082 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:53.082 "strip_size_kb": 64, 00:14:53.082 "state": "configuring", 00:14:53.082 "raid_level": "concat", 00:14:53.082 "superblock": true, 00:14:53.082 "num_base_bdevs": 4, 00:14:53.082 "num_base_bdevs_discovered": 2, 00:14:53.082 "num_base_bdevs_operational": 4, 00:14:53.082 "base_bdevs_list": [ 00:14:53.082 { 00:14:53.082 "name": "BaseBdev1", 00:14:53.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.082 "is_configured": false, 00:14:53.082 "data_offset": 0, 00:14:53.082 "data_size": 0 00:14:53.082 }, 00:14:53.082 { 00:14:53.082 "name": null, 00:14:53.082 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:53.082 "is_configured": false, 00:14:53.082 "data_offset": 0, 00:14:53.082 "data_size": 63488 
00:14:53.082 }, 00:14:53.082 { 00:14:53.082 "name": "BaseBdev3", 00:14:53.082 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:53.082 "is_configured": true, 00:14:53.082 "data_offset": 2048, 00:14:53.082 "data_size": 63488 00:14:53.082 }, 00:14:53.082 { 00:14:53.082 "name": "BaseBdev4", 00:14:53.082 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:53.082 "is_configured": true, 00:14:53.082 "data_offset": 2048, 00:14:53.082 "data_size": 63488 00:14:53.082 } 00:14:53.082 ] 00:14:53.082 }' 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.082 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.649 [2024-12-05 19:34:46.909347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.649 BaseBdev1 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:53.649 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.650 [ 00:14:53.650 { 00:14:53.650 "name": "BaseBdev1", 00:14:53.650 "aliases": [ 00:14:53.650 "800f6c6c-152b-4aad-8ac7-ca35fcd03838" 00:14:53.650 ], 00:14:53.650 "product_name": "Malloc disk", 00:14:53.650 "block_size": 512, 00:14:53.650 "num_blocks": 65536, 00:14:53.650 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:53.650 "assigned_rate_limits": { 00:14:53.650 "rw_ios_per_sec": 0, 00:14:53.650 "rw_mbytes_per_sec": 0, 
00:14:53.650 "r_mbytes_per_sec": 0, 00:14:53.650 "w_mbytes_per_sec": 0 00:14:53.650 }, 00:14:53.650 "claimed": true, 00:14:53.650 "claim_type": "exclusive_write", 00:14:53.650 "zoned": false, 00:14:53.650 "supported_io_types": { 00:14:53.650 "read": true, 00:14:53.650 "write": true, 00:14:53.650 "unmap": true, 00:14:53.650 "flush": true, 00:14:53.650 "reset": true, 00:14:53.650 "nvme_admin": false, 00:14:53.650 "nvme_io": false, 00:14:53.650 "nvme_io_md": false, 00:14:53.650 "write_zeroes": true, 00:14:53.650 "zcopy": true, 00:14:53.650 "get_zone_info": false, 00:14:53.650 "zone_management": false, 00:14:53.650 "zone_append": false, 00:14:53.650 "compare": false, 00:14:53.650 "compare_and_write": false, 00:14:53.650 "abort": true, 00:14:53.650 "seek_hole": false, 00:14:53.650 "seek_data": false, 00:14:53.650 "copy": true, 00:14:53.650 "nvme_iov_md": false 00:14:53.650 }, 00:14:53.650 "memory_domains": [ 00:14:53.650 { 00:14:53.650 "dma_device_id": "system", 00:14:53.650 "dma_device_type": 1 00:14:53.650 }, 00:14:53.650 { 00:14:53.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.650 "dma_device_type": 2 00:14:53.650 } 00:14:53.650 ], 00:14:53.650 "driver_specific": {} 00:14:53.650 } 00:14:53.650 ] 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.650 19:34:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.650 "name": "Existed_Raid", 00:14:53.650 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:53.650 "strip_size_kb": 64, 00:14:53.650 "state": "configuring", 00:14:53.650 "raid_level": "concat", 00:14:53.650 "superblock": true, 00:14:53.650 "num_base_bdevs": 4, 00:14:53.650 "num_base_bdevs_discovered": 3, 00:14:53.650 "num_base_bdevs_operational": 4, 00:14:53.650 "base_bdevs_list": [ 00:14:53.650 { 00:14:53.650 "name": "BaseBdev1", 00:14:53.650 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:53.650 "is_configured": true, 00:14:53.650 "data_offset": 2048, 00:14:53.650 "data_size": 63488 00:14:53.650 }, 00:14:53.650 { 
00:14:53.650 "name": null, 00:14:53.650 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:53.650 "is_configured": false, 00:14:53.650 "data_offset": 0, 00:14:53.650 "data_size": 63488 00:14:53.650 }, 00:14:53.650 { 00:14:53.650 "name": "BaseBdev3", 00:14:53.650 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:53.650 "is_configured": true, 00:14:53.650 "data_offset": 2048, 00:14:53.650 "data_size": 63488 00:14:53.650 }, 00:14:53.650 { 00:14:53.650 "name": "BaseBdev4", 00:14:53.650 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:53.650 "is_configured": true, 00:14:53.650 "data_offset": 2048, 00:14:53.650 "data_size": 63488 00:14:53.650 } 00:14:53.650 ] 00:14:53.650 }' 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.650 19:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.218 [2024-12-05 19:34:47.521692] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.218 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.218 19:34:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.218 "name": "Existed_Raid", 00:14:54.218 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:54.218 "strip_size_kb": 64, 00:14:54.218 "state": "configuring", 00:14:54.218 "raid_level": "concat", 00:14:54.218 "superblock": true, 00:14:54.218 "num_base_bdevs": 4, 00:14:54.218 "num_base_bdevs_discovered": 2, 00:14:54.218 "num_base_bdevs_operational": 4, 00:14:54.218 "base_bdevs_list": [ 00:14:54.218 { 00:14:54.218 "name": "BaseBdev1", 00:14:54.218 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:54.218 "is_configured": true, 00:14:54.218 "data_offset": 2048, 00:14:54.218 "data_size": 63488 00:14:54.218 }, 00:14:54.218 { 00:14:54.218 "name": null, 00:14:54.218 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:54.218 "is_configured": false, 00:14:54.218 "data_offset": 0, 00:14:54.218 "data_size": 63488 00:14:54.219 }, 00:14:54.219 { 00:14:54.219 "name": null, 00:14:54.219 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:54.219 "is_configured": false, 00:14:54.219 "data_offset": 0, 00:14:54.219 "data_size": 63488 00:14:54.219 }, 00:14:54.219 { 00:14:54.219 "name": "BaseBdev4", 00:14:54.219 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:54.219 "is_configured": true, 00:14:54.219 "data_offset": 2048, 00:14:54.219 "data_size": 63488 00:14:54.219 } 00:14:54.219 ] 00:14:54.219 }' 00:14:54.219 19:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.219 19:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.787 
19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.787 [2024-12-05 19:34:48.101808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.787 "name": "Existed_Raid", 00:14:54.787 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:54.787 "strip_size_kb": 64, 00:14:54.787 "state": "configuring", 00:14:54.787 "raid_level": "concat", 00:14:54.787 "superblock": true, 00:14:54.787 "num_base_bdevs": 4, 00:14:54.787 "num_base_bdevs_discovered": 3, 00:14:54.787 "num_base_bdevs_operational": 4, 00:14:54.787 "base_bdevs_list": [ 00:14:54.787 { 00:14:54.787 "name": "BaseBdev1", 00:14:54.787 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:54.787 "is_configured": true, 00:14:54.787 "data_offset": 2048, 00:14:54.787 "data_size": 63488 00:14:54.787 }, 00:14:54.787 { 00:14:54.787 "name": null, 00:14:54.787 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:54.787 "is_configured": false, 00:14:54.787 "data_offset": 0, 00:14:54.787 "data_size": 63488 00:14:54.787 }, 00:14:54.787 { 00:14:54.787 "name": "BaseBdev3", 00:14:54.787 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:54.787 "is_configured": true, 00:14:54.787 "data_offset": 2048, 00:14:54.787 "data_size": 63488 00:14:54.787 }, 00:14:54.787 { 00:14:54.787 "name": "BaseBdev4", 00:14:54.787 "uuid": 
"2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:54.787 "is_configured": true, 00:14:54.787 "data_offset": 2048, 00:14:54.787 "data_size": 63488 00:14:54.787 } 00:14:54.787 ] 00:14:54.787 }' 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.787 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.416 [2024-12-05 19:34:48.682016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.416 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.417 "name": "Existed_Raid", 00:14:55.417 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:55.417 "strip_size_kb": 64, 00:14:55.417 "state": "configuring", 00:14:55.417 "raid_level": "concat", 00:14:55.417 "superblock": true, 00:14:55.417 "num_base_bdevs": 4, 00:14:55.417 "num_base_bdevs_discovered": 2, 00:14:55.417 "num_base_bdevs_operational": 4, 00:14:55.417 "base_bdevs_list": [ 00:14:55.417 { 00:14:55.417 "name": null, 00:14:55.417 
"uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:55.417 "is_configured": false, 00:14:55.417 "data_offset": 0, 00:14:55.417 "data_size": 63488 00:14:55.417 }, 00:14:55.417 { 00:14:55.417 "name": null, 00:14:55.417 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:55.417 "is_configured": false, 00:14:55.417 "data_offset": 0, 00:14:55.417 "data_size": 63488 00:14:55.417 }, 00:14:55.417 { 00:14:55.417 "name": "BaseBdev3", 00:14:55.417 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:55.417 "is_configured": true, 00:14:55.417 "data_offset": 2048, 00:14:55.417 "data_size": 63488 00:14:55.417 }, 00:14:55.417 { 00:14:55.417 "name": "BaseBdev4", 00:14:55.417 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:55.417 "is_configured": true, 00:14:55.417 "data_offset": 2048, 00:14:55.417 "data_size": 63488 00:14:55.417 } 00:14:55.417 ] 00:14:55.417 }' 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.417 19:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.998 [2024-12-05 19:34:49.324449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.998 19:34:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.998 "name": "Existed_Raid", 00:14:55.998 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:55.998 "strip_size_kb": 64, 00:14:55.998 "state": "configuring", 00:14:55.998 "raid_level": "concat", 00:14:55.998 "superblock": true, 00:14:55.998 "num_base_bdevs": 4, 00:14:55.998 "num_base_bdevs_discovered": 3, 00:14:55.998 "num_base_bdevs_operational": 4, 00:14:55.998 "base_bdevs_list": [ 00:14:55.998 { 00:14:55.998 "name": null, 00:14:55.998 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:55.998 "is_configured": false, 00:14:55.998 "data_offset": 0, 00:14:55.998 "data_size": 63488 00:14:55.998 }, 00:14:55.998 { 00:14:55.998 "name": "BaseBdev2", 00:14:55.998 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:55.998 "is_configured": true, 00:14:55.998 "data_offset": 2048, 00:14:55.998 "data_size": 63488 00:14:55.998 }, 00:14:55.998 { 00:14:55.998 "name": "BaseBdev3", 00:14:55.998 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:55.998 "is_configured": true, 00:14:55.998 "data_offset": 2048, 00:14:55.998 "data_size": 63488 00:14:55.998 }, 00:14:55.998 { 00:14:55.998 "name": "BaseBdev4", 00:14:55.998 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:55.998 "is_configured": true, 00:14:55.998 "data_offset": 2048, 00:14:55.998 "data_size": 63488 00:14:55.998 } 00:14:55.998 ] 00:14:55.998 }' 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.998 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.566 19:34:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 800f6c6c-152b-4aad-8ac7-ca35fcd03838 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 [2024-12-05 19:34:49.980509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:56.566 [2024-12-05 19:34:49.980829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:56.566 [2024-12-05 19:34:49.980848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:56.566 [2024-12-05 19:34:49.981220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:56.566 [2024-12-05 19:34:49.981401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:56.566 [2024-12-05 19:34:49.981422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:56.566 [2024-12-05 19:34:49.981576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.566 NewBaseBdev 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:56.566 19:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.566 19:34:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 [ 00:14:56.566 { 00:14:56.566 "name": "NewBaseBdev", 00:14:56.566 "aliases": [ 00:14:56.566 "800f6c6c-152b-4aad-8ac7-ca35fcd03838" 00:14:56.566 ], 00:14:56.566 "product_name": "Malloc disk", 00:14:56.566 "block_size": 512, 00:14:56.566 "num_blocks": 65536, 00:14:56.566 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:56.566 "assigned_rate_limits": { 00:14:56.566 "rw_ios_per_sec": 0, 00:14:56.566 "rw_mbytes_per_sec": 0, 00:14:56.566 "r_mbytes_per_sec": 0, 00:14:56.566 "w_mbytes_per_sec": 0 00:14:56.566 }, 00:14:56.566 "claimed": true, 00:14:56.566 "claim_type": "exclusive_write", 00:14:56.566 "zoned": false, 00:14:56.566 "supported_io_types": { 00:14:56.566 "read": true, 00:14:56.566 "write": true, 00:14:56.566 "unmap": true, 00:14:56.566 "flush": true, 00:14:56.566 "reset": true, 00:14:56.566 "nvme_admin": false, 00:14:56.566 "nvme_io": false, 00:14:56.566 "nvme_io_md": false, 00:14:56.825 "write_zeroes": true, 00:14:56.825 "zcopy": true, 00:14:56.825 "get_zone_info": false, 00:14:56.825 "zone_management": false, 00:14:56.825 "zone_append": false, 00:14:56.825 "compare": false, 00:14:56.825 "compare_and_write": false, 00:14:56.825 "abort": true, 00:14:56.825 "seek_hole": false, 00:14:56.825 "seek_data": false, 00:14:56.825 "copy": true, 00:14:56.825 "nvme_iov_md": false 00:14:56.825 }, 00:14:56.825 "memory_domains": [ 00:14:56.825 { 00:14:56.825 "dma_device_id": "system", 00:14:56.825 "dma_device_type": 1 00:14:56.825 }, 00:14:56.825 { 00:14:56.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.825 "dma_device_type": 2 00:14:56.825 } 00:14:56.825 ], 00:14:56.825 "driver_specific": {} 00:14:56.825 } 00:14:56.825 ] 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.825 19:34:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.825 "name": "Existed_Raid", 00:14:56.825 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:56.825 "strip_size_kb": 64, 00:14:56.825 
"state": "online", 00:14:56.825 "raid_level": "concat", 00:14:56.825 "superblock": true, 00:14:56.825 "num_base_bdevs": 4, 00:14:56.825 "num_base_bdevs_discovered": 4, 00:14:56.825 "num_base_bdevs_operational": 4, 00:14:56.825 "base_bdevs_list": [ 00:14:56.825 { 00:14:56.825 "name": "NewBaseBdev", 00:14:56.825 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:56.825 "is_configured": true, 00:14:56.825 "data_offset": 2048, 00:14:56.825 "data_size": 63488 00:14:56.825 }, 00:14:56.825 { 00:14:56.825 "name": "BaseBdev2", 00:14:56.825 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:56.825 "is_configured": true, 00:14:56.825 "data_offset": 2048, 00:14:56.825 "data_size": 63488 00:14:56.825 }, 00:14:56.825 { 00:14:56.825 "name": "BaseBdev3", 00:14:56.825 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:56.825 "is_configured": true, 00:14:56.825 "data_offset": 2048, 00:14:56.825 "data_size": 63488 00:14:56.825 }, 00:14:56.825 { 00:14:56.825 "name": "BaseBdev4", 00:14:56.825 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:56.825 "is_configured": true, 00:14:56.825 "data_offset": 2048, 00:14:56.825 "data_size": 63488 00:14:56.825 } 00:14:56.825 ] 00:14:56.825 }' 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.825 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.391 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.392 
19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.392 [2024-12-05 19:34:50.545189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.392 "name": "Existed_Raid", 00:14:57.392 "aliases": [ 00:14:57.392 "bee56831-406d-4ec7-a263-775b84f4707a" 00:14:57.392 ], 00:14:57.392 "product_name": "Raid Volume", 00:14:57.392 "block_size": 512, 00:14:57.392 "num_blocks": 253952, 00:14:57.392 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:57.392 "assigned_rate_limits": { 00:14:57.392 "rw_ios_per_sec": 0, 00:14:57.392 "rw_mbytes_per_sec": 0, 00:14:57.392 "r_mbytes_per_sec": 0, 00:14:57.392 "w_mbytes_per_sec": 0 00:14:57.392 }, 00:14:57.392 "claimed": false, 00:14:57.392 "zoned": false, 00:14:57.392 "supported_io_types": { 00:14:57.392 "read": true, 00:14:57.392 "write": true, 00:14:57.392 "unmap": true, 00:14:57.392 "flush": true, 00:14:57.392 "reset": true, 00:14:57.392 "nvme_admin": false, 00:14:57.392 "nvme_io": false, 00:14:57.392 "nvme_io_md": false, 00:14:57.392 "write_zeroes": true, 00:14:57.392 "zcopy": false, 00:14:57.392 "get_zone_info": false, 00:14:57.392 "zone_management": false, 00:14:57.392 "zone_append": false, 00:14:57.392 "compare": false, 00:14:57.392 "compare_and_write": false, 00:14:57.392 "abort": 
false, 00:14:57.392 "seek_hole": false, 00:14:57.392 "seek_data": false, 00:14:57.392 "copy": false, 00:14:57.392 "nvme_iov_md": false 00:14:57.392 }, 00:14:57.392 "memory_domains": [ 00:14:57.392 { 00:14:57.392 "dma_device_id": "system", 00:14:57.392 "dma_device_type": 1 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.392 "dma_device_type": 2 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "system", 00:14:57.392 "dma_device_type": 1 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.392 "dma_device_type": 2 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "system", 00:14:57.392 "dma_device_type": 1 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.392 "dma_device_type": 2 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "system", 00:14:57.392 "dma_device_type": 1 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.392 "dma_device_type": 2 00:14:57.392 } 00:14:57.392 ], 00:14:57.392 "driver_specific": { 00:14:57.392 "raid": { 00:14:57.392 "uuid": "bee56831-406d-4ec7-a263-775b84f4707a", 00:14:57.392 "strip_size_kb": 64, 00:14:57.392 "state": "online", 00:14:57.392 "raid_level": "concat", 00:14:57.392 "superblock": true, 00:14:57.392 "num_base_bdevs": 4, 00:14:57.392 "num_base_bdevs_discovered": 4, 00:14:57.392 "num_base_bdevs_operational": 4, 00:14:57.392 "base_bdevs_list": [ 00:14:57.392 { 00:14:57.392 "name": "NewBaseBdev", 00:14:57.392 "uuid": "800f6c6c-152b-4aad-8ac7-ca35fcd03838", 00:14:57.392 "is_configured": true, 00:14:57.392 "data_offset": 2048, 00:14:57.392 "data_size": 63488 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "name": "BaseBdev2", 00:14:57.392 "uuid": "d097a4a1-7813-4d15-a5d5-3a0e63c62f8f", 00:14:57.392 "is_configured": true, 00:14:57.392 "data_offset": 2048, 00:14:57.392 "data_size": 63488 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 
"name": "BaseBdev3", 00:14:57.392 "uuid": "19de3d95-0b33-4c94-8987-d9a6b5db386a", 00:14:57.392 "is_configured": true, 00:14:57.392 "data_offset": 2048, 00:14:57.392 "data_size": 63488 00:14:57.392 }, 00:14:57.392 { 00:14:57.392 "name": "BaseBdev4", 00:14:57.392 "uuid": "2fb95a06-9861-432f-8f3b-d51ccd98fbe6", 00:14:57.392 "is_configured": true, 00:14:57.392 "data_offset": 2048, 00:14:57.392 "data_size": 63488 00:14:57.392 } 00:14:57.392 ] 00:14:57.392 } 00:14:57.392 } 00:14:57.392 }' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:57.392 BaseBdev2 00:14:57.392 BaseBdev3 00:14:57.392 BaseBdev4' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.392 19:34:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.392 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.393 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.393 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.393 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.393 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.393 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.651 [2024-12-05 19:34:50.908812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.651 [2024-12-05 19:34:50.908866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.651 [2024-12-05 19:34:50.908962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.651 [2024-12-05 19:34:50.909072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.651 [2024-12-05 19:34:50.909097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72063 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72063 ']' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72063 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72063 00:14:57.651 killing process with pid 72063 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72063' 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72063 00:14:57.651 [2024-12-05 19:34:50.946261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.651 19:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72063 00:14:57.909 [2024-12-05 19:34:51.292598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.283 19:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:59.283 00:14:59.283 real 0m12.972s 00:14:59.283 user 0m21.513s 00:14:59.283 sys 0m1.812s 00:14:59.283 19:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.283 19:34:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 ************************************ 00:14:59.284 END TEST raid_state_function_test_sb 00:14:59.284 ************************************ 00:14:59.284 19:34:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:59.284 19:34:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:59.284 19:34:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.284 19:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 ************************************ 00:14:59.284 START TEST raid_superblock_test 00:14:59.284 ************************************ 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72750 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72750 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72750 ']' 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.284 19:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.284 [2024-12-05 19:34:52.570424] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:14:59.284 [2024-12-05 19:34:52.570604] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72750 ] 00:14:59.544 [2024-12-05 19:34:52.756043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.544 [2024-12-05 19:34:52.885385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.802 [2024-12-05 19:34:53.094503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.802 [2024-12-05 19:34:53.094546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.369 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:00.370 
19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 malloc1 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 [2024-12-05 19:34:53.548394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.370 [2024-12-05 19:34:53.548492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.370 [2024-12-05 19:34:53.548522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:00.370 [2024-12-05 19:34:53.548536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.370 [2024-12-05 19:34:53.551389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.370 [2024-12-05 19:34:53.551429] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.370 pt1 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 malloc2 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 [2024-12-05 19:34:53.599757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.370 [2024-12-05 19:34:53.599833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.370 [2024-12-05 19:34:53.599869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:00.370 [2024-12-05 19:34:53.599885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.370 [2024-12-05 19:34:53.602509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.370 [2024-12-05 19:34:53.602550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.370 
pt2 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 malloc3 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 [2024-12-05 19:34:53.660285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:00.370 [2024-12-05 19:34:53.660354] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.370 [2024-12-05 19:34:53.660395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.370 [2024-12-05 19:34:53.660410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.370 [2024-12-05 19:34:53.663382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.370 [2024-12-05 19:34:53.663425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:00.370 pt3 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 malloc4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 [2024-12-05 19:34:53.712245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:00.370 [2024-12-05 19:34:53.712328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.370 [2024-12-05 19:34:53.712358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:00.370 [2024-12-05 19:34:53.712372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.370 [2024-12-05 19:34:53.715091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.370 [2024-12-05 19:34:53.715133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:00.370 pt4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 [2024-12-05 19:34:53.720263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.370 [2024-12-05 
19:34:53.722519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.370 [2024-12-05 19:34:53.722638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.370 [2024-12-05 19:34:53.722731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:00.370 [2024-12-05 19:34:53.722965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:00.370 [2024-12-05 19:34:53.722990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:00.370 [2024-12-05 19:34:53.723314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:00.370 [2024-12-05 19:34:53.723534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:00.370 [2024-12-05 19:34:53.723562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:00.370 [2024-12-05 19:34:53.723798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.371 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.371 "name": "raid_bdev1", 00:15:00.371 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:00.371 "strip_size_kb": 64, 00:15:00.371 "state": "online", 00:15:00.371 "raid_level": "concat", 00:15:00.371 "superblock": true, 00:15:00.371 "num_base_bdevs": 4, 00:15:00.371 "num_base_bdevs_discovered": 4, 00:15:00.371 "num_base_bdevs_operational": 4, 00:15:00.371 "base_bdevs_list": [ 00:15:00.371 { 00:15:00.371 "name": "pt1", 00:15:00.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.371 "is_configured": true, 00:15:00.371 "data_offset": 2048, 00:15:00.371 "data_size": 63488 00:15:00.371 }, 00:15:00.371 { 00:15:00.371 "name": "pt2", 00:15:00.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.371 "is_configured": true, 00:15:00.371 "data_offset": 2048, 00:15:00.371 "data_size": 63488 00:15:00.371 }, 00:15:00.371 { 00:15:00.371 "name": "pt3", 00:15:00.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.371 "is_configured": true, 00:15:00.371 "data_offset": 2048, 00:15:00.371 
"data_size": 63488 00:15:00.371 }, 00:15:00.371 { 00:15:00.371 "name": "pt4", 00:15:00.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.371 "is_configured": true, 00:15:00.371 "data_offset": 2048, 00:15:00.371 "data_size": 63488 00:15:00.371 } 00:15:00.371 ] 00:15:00.371 }' 00:15:00.371 19:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.371 19:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.938 [2024-12-05 19:34:54.228923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.938 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.938 "name": "raid_bdev1", 00:15:00.938 "aliases": [ 00:15:00.938 "17355001-450b-403b-b8b1-d8de6a025423" 
00:15:00.938 ], 00:15:00.938 "product_name": "Raid Volume", 00:15:00.938 "block_size": 512, 00:15:00.938 "num_blocks": 253952, 00:15:00.938 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:00.938 "assigned_rate_limits": { 00:15:00.938 "rw_ios_per_sec": 0, 00:15:00.938 "rw_mbytes_per_sec": 0, 00:15:00.938 "r_mbytes_per_sec": 0, 00:15:00.938 "w_mbytes_per_sec": 0 00:15:00.938 }, 00:15:00.938 "claimed": false, 00:15:00.938 "zoned": false, 00:15:00.938 "supported_io_types": { 00:15:00.938 "read": true, 00:15:00.938 "write": true, 00:15:00.938 "unmap": true, 00:15:00.938 "flush": true, 00:15:00.938 "reset": true, 00:15:00.938 "nvme_admin": false, 00:15:00.938 "nvme_io": false, 00:15:00.938 "nvme_io_md": false, 00:15:00.938 "write_zeroes": true, 00:15:00.938 "zcopy": false, 00:15:00.938 "get_zone_info": false, 00:15:00.938 "zone_management": false, 00:15:00.938 "zone_append": false, 00:15:00.938 "compare": false, 00:15:00.938 "compare_and_write": false, 00:15:00.938 "abort": false, 00:15:00.938 "seek_hole": false, 00:15:00.938 "seek_data": false, 00:15:00.939 "copy": false, 00:15:00.939 "nvme_iov_md": false 00:15:00.939 }, 00:15:00.939 "memory_domains": [ 00:15:00.939 { 00:15:00.939 "dma_device_id": "system", 00:15:00.939 "dma_device_type": 1 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.939 "dma_device_type": 2 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": "system", 00:15:00.939 "dma_device_type": 1 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.939 "dma_device_type": 2 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": "system", 00:15:00.939 "dma_device_type": 1 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.939 "dma_device_type": 2 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": "system", 00:15:00.939 "dma_device_type": 1 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:00.939 "dma_device_type": 2 00:15:00.939 } 00:15:00.939 ], 00:15:00.939 "driver_specific": { 00:15:00.939 "raid": { 00:15:00.939 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:00.939 "strip_size_kb": 64, 00:15:00.939 "state": "online", 00:15:00.939 "raid_level": "concat", 00:15:00.939 "superblock": true, 00:15:00.939 "num_base_bdevs": 4, 00:15:00.939 "num_base_bdevs_discovered": 4, 00:15:00.939 "num_base_bdevs_operational": 4, 00:15:00.939 "base_bdevs_list": [ 00:15:00.939 { 00:15:00.939 "name": "pt1", 00:15:00.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.939 "is_configured": true, 00:15:00.939 "data_offset": 2048, 00:15:00.939 "data_size": 63488 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "name": "pt2", 00:15:00.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.939 "is_configured": true, 00:15:00.939 "data_offset": 2048, 00:15:00.939 "data_size": 63488 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "name": "pt3", 00:15:00.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.939 "is_configured": true, 00:15:00.939 "data_offset": 2048, 00:15:00.939 "data_size": 63488 00:15:00.939 }, 00:15:00.939 { 00:15:00.939 "name": "pt4", 00:15:00.939 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.939 "is_configured": true, 00:15:00.939 "data_offset": 2048, 00:15:00.939 "data_size": 63488 00:15:00.939 } 00:15:00.939 ] 00:15:00.939 } 00:15:00.939 } 00:15:00.939 }' 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.939 pt2 00:15:00.939 pt3 00:15:00.939 pt4' 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.939 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.199 19:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:01.199 [2024-12-05 19:34:54.584921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=17355001-450b-403b-b8b1-d8de6a025423 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 17355001-450b-403b-b8b1-d8de6a025423 ']' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.199 [2024-12-05 19:34:54.628560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.199 [2024-12-05 19:34:54.628613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.199 [2024-12-05 19:34:54.628815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.199 [2024-12-05 19:34:54.628954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.199 [2024-12-05 19:34:54.628991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.199 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 [2024-12-05 19:34:54.780700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:01.501 [2024-12-05 19:34:54.784092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:01.501 [2024-12-05 19:34:54.784189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:01.501 [2024-12-05 19:34:54.784267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:01.501 [2024-12-05 19:34:54.784368] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:01.501 [2024-12-05 19:34:54.784466] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:01.501 [2024-12-05 19:34:54.784514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:01.501 [2024-12-05 19:34:54.784591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:01.501 [2024-12-05 19:34:54.784630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.501 [2024-12-05 19:34:54.784656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:15:01.501 request: 00:15:01.501 { 00:15:01.501 "name": "raid_bdev1", 00:15:01.501 "raid_level": "concat", 00:15:01.501 "base_bdevs": [ 00:15:01.501 "malloc1", 00:15:01.501 "malloc2", 00:15:01.501 "malloc3", 00:15:01.501 "malloc4" 00:15:01.501 ], 00:15:01.501 "strip_size_kb": 64, 00:15:01.501 "superblock": false, 00:15:01.501 "method": "bdev_raid_create", 00:15:01.501 "req_id": 1 00:15:01.501 } 00:15:01.501 Got JSON-RPC error response 00:15:01.501 response: 00:15:01.501 { 00:15:01.501 "code": -17, 00:15:01.501 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:01.501 } 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 [2024-12-05 19:34:54.848958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.501 [2024-12-05 19:34:54.849028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.501 [2024-12-05 19:34:54.849058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:01.501 [2024-12-05 19:34:54.849076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.501 [2024-12-05 19:34:54.852878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.501 [2024-12-05 19:34:54.852947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.501 [2024-12-05 19:34:54.853093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:01.501 [2024-12-05 19:34:54.853243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.501 pt1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.501 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.501 "name": "raid_bdev1", 00:15:01.501 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:01.501 "strip_size_kb": 64, 00:15:01.501 "state": "configuring", 00:15:01.501 "raid_level": "concat", 00:15:01.501 "superblock": true, 00:15:01.501 "num_base_bdevs": 4, 00:15:01.501 "num_base_bdevs_discovered": 1, 00:15:01.501 "num_base_bdevs_operational": 4, 00:15:01.501 "base_bdevs_list": [ 00:15:01.501 { 00:15:01.501 "name": "pt1", 00:15:01.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.502 "is_configured": true, 00:15:01.502 "data_offset": 2048, 00:15:01.502 "data_size": 63488 00:15:01.502 }, 00:15:01.502 { 00:15:01.502 "name": null, 00:15:01.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.502 "is_configured": false, 00:15:01.502 "data_offset": 2048, 00:15:01.502 "data_size": 63488 00:15:01.502 }, 00:15:01.502 { 00:15:01.502 "name": null, 00:15:01.502 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.502 "is_configured": false, 00:15:01.502 "data_offset": 2048, 00:15:01.502 "data_size": 63488 00:15:01.502 }, 00:15:01.502 { 00:15:01.502 "name": null, 00:15:01.502 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.502 "is_configured": false, 00:15:01.502 "data_offset": 2048, 00:15:01.502 "data_size": 63488 00:15:01.502 } 00:15:01.502 ] 00:15:01.502 }' 00:15:01.502 19:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.502 19:34:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.069 [2024-12-05 19:34:55.373440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.069 [2024-12-05 19:34:55.373597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.069 [2024-12-05 19:34:55.373637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:02.069 [2024-12-05 19:34:55.373660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.069 [2024-12-05 19:34:55.374438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.069 [2024-12-05 19:34:55.374523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.069 [2024-12-05 19:34:55.374673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:02.069 [2024-12-05 19:34:55.374723] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.069 pt2 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.069 [2024-12-05 19:34:55.381457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.069 19:34:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.069 "name": "raid_bdev1", 00:15:02.069 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:02.069 "strip_size_kb": 64, 00:15:02.069 "state": "configuring", 00:15:02.069 "raid_level": "concat", 00:15:02.069 "superblock": true, 00:15:02.069 "num_base_bdevs": 4, 00:15:02.069 "num_base_bdevs_discovered": 1, 00:15:02.069 "num_base_bdevs_operational": 4, 00:15:02.069 "base_bdevs_list": [ 00:15:02.069 { 00:15:02.069 "name": "pt1", 00:15:02.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.069 "is_configured": true, 00:15:02.069 "data_offset": 2048, 00:15:02.069 "data_size": 63488 00:15:02.069 }, 00:15:02.069 { 00:15:02.069 "name": null, 00:15:02.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.069 "is_configured": false, 00:15:02.069 "data_offset": 0, 00:15:02.069 "data_size": 63488 00:15:02.069 }, 00:15:02.069 { 00:15:02.069 "name": null, 00:15:02.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.069 "is_configured": false, 00:15:02.069 "data_offset": 2048, 00:15:02.069 "data_size": 63488 00:15:02.069 }, 00:15:02.069 { 00:15:02.069 "name": null, 00:15:02.069 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.069 "is_configured": false, 00:15:02.069 "data_offset": 2048, 00:15:02.069 "data_size": 63488 00:15:02.069 } 00:15:02.069 ] 00:15:02.069 }' 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.069 19:34:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.637 [2024-12-05 19:34:55.913603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.637 [2024-12-05 19:34:55.913766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.637 [2024-12-05 19:34:55.913808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:02.637 [2024-12-05 19:34:55.913826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.637 [2024-12-05 19:34:55.914543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.637 [2024-12-05 19:34:55.914601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.637 [2024-12-05 19:34:55.914789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:02.637 [2024-12-05 19:34:55.914829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.637 pt2 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.637 [2024-12-05 19:34:55.921523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.637 [2024-12-05 19:34:55.921620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.637 [2024-12-05 19:34:55.921653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:02.637 [2024-12-05 19:34:55.921672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.637 [2024-12-05 19:34:55.922205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.637 [2024-12-05 19:34:55.922251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.637 [2024-12-05 19:34:55.922344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:02.637 [2024-12-05 19:34:55.922386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.637 pt3 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.637 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.638 [2024-12-05 19:34:55.929494] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:02.638 [2024-12-05 19:34:55.929583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.638 [2024-12-05 19:34:55.929614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:02.638 [2024-12-05 19:34:55.929631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.638 [2024-12-05 19:34:55.930181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.638 [2024-12-05 19:34:55.930258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:02.638 [2024-12-05 19:34:55.930352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:02.638 [2024-12-05 19:34:55.930403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:02.638 [2024-12-05 19:34:55.930592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.638 [2024-12-05 19:34:55.930621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:02.638 [2024-12-05 19:34:55.930977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:02.638 [2024-12-05 19:34:55.931238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.638 [2024-12-05 19:34:55.931275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:02.638 [2024-12-05 19:34:55.931450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.638 pt4 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.638 "name": "raid_bdev1", 00:15:02.638 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:02.638 "strip_size_kb": 64, 00:15:02.638 "state": "online", 00:15:02.638 "raid_level": "concat", 00:15:02.638 
"superblock": true, 00:15:02.638 "num_base_bdevs": 4, 00:15:02.638 "num_base_bdevs_discovered": 4, 00:15:02.638 "num_base_bdevs_operational": 4, 00:15:02.638 "base_bdevs_list": [ 00:15:02.638 { 00:15:02.638 "name": "pt1", 00:15:02.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.638 "is_configured": true, 00:15:02.638 "data_offset": 2048, 00:15:02.638 "data_size": 63488 00:15:02.638 }, 00:15:02.638 { 00:15:02.638 "name": "pt2", 00:15:02.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.638 "is_configured": true, 00:15:02.638 "data_offset": 2048, 00:15:02.638 "data_size": 63488 00:15:02.638 }, 00:15:02.638 { 00:15:02.638 "name": "pt3", 00:15:02.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.638 "is_configured": true, 00:15:02.638 "data_offset": 2048, 00:15:02.638 "data_size": 63488 00:15:02.638 }, 00:15:02.638 { 00:15:02.638 "name": "pt4", 00:15:02.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.638 "is_configured": true, 00:15:02.638 "data_offset": 2048, 00:15:02.638 "data_size": 63488 00:15:02.638 } 00:15:02.638 ] 00:15:02.638 }' 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.638 19:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.206 19:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.206 [2024-12-05 19:34:56.458294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.206 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.206 "name": "raid_bdev1", 00:15:03.206 "aliases": [ 00:15:03.206 "17355001-450b-403b-b8b1-d8de6a025423" 00:15:03.206 ], 00:15:03.206 "product_name": "Raid Volume", 00:15:03.206 "block_size": 512, 00:15:03.206 "num_blocks": 253952, 00:15:03.206 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:03.206 "assigned_rate_limits": { 00:15:03.206 "rw_ios_per_sec": 0, 00:15:03.206 "rw_mbytes_per_sec": 0, 00:15:03.206 "r_mbytes_per_sec": 0, 00:15:03.206 "w_mbytes_per_sec": 0 00:15:03.206 }, 00:15:03.206 "claimed": false, 00:15:03.206 "zoned": false, 00:15:03.206 "supported_io_types": { 00:15:03.206 "read": true, 00:15:03.206 "write": true, 00:15:03.206 "unmap": true, 00:15:03.206 "flush": true, 00:15:03.206 "reset": true, 00:15:03.206 "nvme_admin": false, 00:15:03.206 "nvme_io": false, 00:15:03.206 "nvme_io_md": false, 00:15:03.206 "write_zeroes": true, 00:15:03.206 "zcopy": false, 00:15:03.206 "get_zone_info": false, 00:15:03.206 "zone_management": false, 00:15:03.206 "zone_append": false, 00:15:03.206 "compare": false, 00:15:03.206 "compare_and_write": false, 00:15:03.206 "abort": false, 00:15:03.206 "seek_hole": false, 00:15:03.207 "seek_data": false, 00:15:03.207 "copy": false, 00:15:03.207 "nvme_iov_md": false 00:15:03.207 }, 00:15:03.207 
"memory_domains": [ 00:15:03.207 { 00:15:03.207 "dma_device_id": "system", 00:15:03.207 "dma_device_type": 1 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.207 "dma_device_type": 2 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "system", 00:15:03.207 "dma_device_type": 1 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.207 "dma_device_type": 2 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "system", 00:15:03.207 "dma_device_type": 1 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.207 "dma_device_type": 2 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "system", 00:15:03.207 "dma_device_type": 1 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.207 "dma_device_type": 2 00:15:03.207 } 00:15:03.207 ], 00:15:03.207 "driver_specific": { 00:15:03.207 "raid": { 00:15:03.207 "uuid": "17355001-450b-403b-b8b1-d8de6a025423", 00:15:03.207 "strip_size_kb": 64, 00:15:03.207 "state": "online", 00:15:03.207 "raid_level": "concat", 00:15:03.207 "superblock": true, 00:15:03.207 "num_base_bdevs": 4, 00:15:03.207 "num_base_bdevs_discovered": 4, 00:15:03.207 "num_base_bdevs_operational": 4, 00:15:03.207 "base_bdevs_list": [ 00:15:03.207 { 00:15:03.207 "name": "pt1", 00:15:03.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.207 "is_configured": true, 00:15:03.207 "data_offset": 2048, 00:15:03.207 "data_size": 63488 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "name": "pt2", 00:15:03.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.207 "is_configured": true, 00:15:03.207 "data_offset": 2048, 00:15:03.207 "data_size": 63488 00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "name": "pt3", 00:15:03.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.207 "is_configured": true, 00:15:03.207 "data_offset": 2048, 00:15:03.207 "data_size": 63488 
00:15:03.207 }, 00:15:03.207 { 00:15:03.207 "name": "pt4", 00:15:03.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.207 "is_configured": true, 00:15:03.207 "data_offset": 2048, 00:15:03.207 "data_size": 63488 00:15:03.207 } 00:15:03.207 ] 00:15:03.207 } 00:15:03.207 } 00:15:03.207 }' 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:03.207 pt2 00:15:03.207 pt3 00:15:03.207 pt4' 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.207 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.493 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.493 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.493 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.493 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:03.493 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.493 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 [2024-12-05 19:34:56.826288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 17355001-450b-403b-b8b1-d8de6a025423 '!=' 17355001-450b-403b-b8b1-d8de6a025423 ']' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72750 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72750 ']' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72750 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72750 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72750' 00:15:03.494 killing process with pid 72750 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72750 00:15:03.494 [2024-12-05 19:34:56.904685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.494 19:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72750 00:15:03.494 [2024-12-05 19:34:56.904834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.494 [2024-12-05 19:34:56.904965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.494 [2024-12-05 19:34:56.904995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:04.061 [2024-12-05 19:34:57.272180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.997 19:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:04.997 00:15:04.997 real 0m5.955s 00:15:04.997 user 0m8.828s 00:15:04.997 sys 0m0.925s 00:15:04.997 19:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.997 19:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.997 ************************************ 00:15:04.997 END TEST raid_superblock_test 
00:15:04.997 ************************************ 00:15:05.256 19:34:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:05.256 19:34:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:05.256 19:34:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.256 19:34:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.256 ************************************ 00:15:05.256 START TEST raid_read_error_test 00:15:05.256 ************************************ 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:05.256 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KV47eOh8pr 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73015 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73015 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73015 ']' 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.257 19:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.257 [2024-12-05 19:34:58.577636] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:05.257 [2024-12-05 19:34:58.577882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73015 ] 00:15:05.515 [2024-12-05 19:34:58.760444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.515 [2024-12-05 19:34:58.911204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.773 [2024-12-05 19:34:59.143266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.773 [2024-12-05 19:34:59.143318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.350 BaseBdev1_malloc 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.350 true 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.350 [2024-12-05 19:34:59.650282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:06.350 [2024-12-05 19:34:59.650368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.350 [2024-12-05 19:34:59.650406] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:06.350 [2024-12-05 19:34:59.650432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.350 [2024-12-05 19:34:59.653836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.350 [2024-12-05 19:34:59.653889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:06.350 BaseBdev1 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.350 BaseBdev2_malloc 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.350 true 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.350 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.350 [2024-12-05 19:34:59.717809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:06.350 [2024-12-05 19:34:59.717894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.350 [2024-12-05 19:34:59.717926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:06.350 [2024-12-05 19:34:59.717948] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.350 [2024-12-05 19:34:59.721235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.350 [2024-12-05 19:34:59.721456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:06.350 BaseBdev2 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.351 BaseBdev3_malloc 00:15:06.351 19:34:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.351 true 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.351 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.351 [2024-12-05 19:34:59.790534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:06.609 [2024-12-05 19:34:59.790823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.609 [2024-12-05 19:34:59.790871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:06.609 [2024-12-05 19:34:59.790907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.609 [2024-12-05 19:34:59.794291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.609 BaseBdev3 00:15:06.609 [2024-12-05 19:34:59.794524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.609 BaseBdev4_malloc 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.609 true 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.609 [2024-12-05 19:34:59.854554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:06.609 [2024-12-05 19:34:59.854648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.609 [2024-12-05 19:34:59.854681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:06.609 [2024-12-05 19:34:59.854703] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.609 [2024-12-05 19:34:59.857845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.609 [2024-12-05 19:34:59.857920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:06.609 BaseBdev4 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.609 [2024-12-05 19:34:59.862637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.609 [2024-12-05 19:34:59.865476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.609 [2024-12-05 19:34:59.865792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.609 [2024-12-05 19:34:59.866040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:06.609 [2024-12-05 19:34:59.866409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:06.609 [2024-12-05 19:34:59.866437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:06.609 [2024-12-05 19:34:59.866792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:06.609 [2024-12-05 19:34:59.867033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:06.609 [2024-12-05 19:34:59.867070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:06.609 [2024-12-05 19:34:59.867360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:06.609 19:34:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.609 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.610 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.610 "name": "raid_bdev1", 00:15:06.610 "uuid": "98b66225-82bd-44b8-ba0d-0dcfc1ec08c5", 00:15:06.610 "strip_size_kb": 64, 00:15:06.610 "state": "online", 00:15:06.610 "raid_level": "concat", 00:15:06.610 "superblock": true, 00:15:06.610 "num_base_bdevs": 4, 00:15:06.610 "num_base_bdevs_discovered": 4, 00:15:06.610 "num_base_bdevs_operational": 4, 00:15:06.610 "base_bdevs_list": [ 
00:15:06.610 { 00:15:06.610 "name": "BaseBdev1", 00:15:06.610 "uuid": "70b40340-b543-5663-9fcc-6472e32107cd", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 2048, 00:15:06.610 "data_size": 63488 00:15:06.610 }, 00:15:06.610 { 00:15:06.610 "name": "BaseBdev2", 00:15:06.610 "uuid": "56bd883e-a7dc-5295-a691-0843ac5d0dec", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 2048, 00:15:06.610 "data_size": 63488 00:15:06.610 }, 00:15:06.610 { 00:15:06.610 "name": "BaseBdev3", 00:15:06.610 "uuid": "d037680c-2338-5dd1-903c-2785ba7e2e43", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 2048, 00:15:06.610 "data_size": 63488 00:15:06.610 }, 00:15:06.610 { 00:15:06.610 "name": "BaseBdev4", 00:15:06.610 "uuid": "aab11217-7f21-56ca-b005-47302656f22f", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 2048, 00:15:06.610 "data_size": 63488 00:15:06.610 } 00:15:06.610 ] 00:15:06.610 }' 00:15:06.610 19:34:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.610 19:34:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.188 19:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:07.188 19:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:07.188 [2024-12-05 19:35:00.549034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.247 19:35:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.247 19:35:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.247 "name": "raid_bdev1", 00:15:08.247 "uuid": "98b66225-82bd-44b8-ba0d-0dcfc1ec08c5", 00:15:08.247 "strip_size_kb": 64, 00:15:08.247 "state": "online", 00:15:08.247 "raid_level": "concat", 00:15:08.247 "superblock": true, 00:15:08.247 "num_base_bdevs": 4, 00:15:08.247 "num_base_bdevs_discovered": 4, 00:15:08.247 "num_base_bdevs_operational": 4, 00:15:08.247 "base_bdevs_list": [ 00:15:08.247 { 00:15:08.247 "name": "BaseBdev1", 00:15:08.247 "uuid": "70b40340-b543-5663-9fcc-6472e32107cd", 00:15:08.247 "is_configured": true, 00:15:08.247 "data_offset": 2048, 00:15:08.247 "data_size": 63488 00:15:08.247 }, 00:15:08.247 { 00:15:08.247 "name": "BaseBdev2", 00:15:08.247 "uuid": "56bd883e-a7dc-5295-a691-0843ac5d0dec", 00:15:08.247 "is_configured": true, 00:15:08.247 "data_offset": 2048, 00:15:08.247 "data_size": 63488 00:15:08.247 }, 00:15:08.247 { 00:15:08.247 "name": "BaseBdev3", 00:15:08.247 "uuid": "d037680c-2338-5dd1-903c-2785ba7e2e43", 00:15:08.247 "is_configured": true, 00:15:08.247 "data_offset": 2048, 00:15:08.247 "data_size": 63488 00:15:08.247 }, 00:15:08.247 { 00:15:08.247 "name": "BaseBdev4", 00:15:08.247 "uuid": "aab11217-7f21-56ca-b005-47302656f22f", 00:15:08.247 "is_configured": true, 00:15:08.247 "data_offset": 2048, 00:15:08.247 "data_size": 63488 00:15:08.247 } 00:15:08.247 ] 00:15:08.247 }' 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.247 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.507 [2024-12-05 19:35:01.927277] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.507 [2024-12-05 19:35:01.927322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.507 [2024-12-05 19:35:01.931019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.507 [2024-12-05 19:35:01.931274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.507 [2024-12-05 19:35:01.931391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.507 [2024-12-05 19:35:01.931707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73015 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73015 ']' 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73015 00:15:08.507 { 00:15:08.507 "results": [ 00:15:08.507 { 00:15:08.507 "job": "raid_bdev1", 00:15:08.507 "core_mask": "0x1", 00:15:08.507 "workload": "randrw", 00:15:08.507 "percentage": 50, 00:15:08.507 "status": "finished", 00:15:08.507 "queue_depth": 1, 00:15:08.507 "io_size": 131072, 00:15:08.507 "runtime": 1.375621, 00:15:08.507 "iops": 9087.532103682628, 00:15:08.507 "mibps": 1135.9415129603285, 00:15:08.507 "io_failed": 1, 00:15:08.507 "io_timeout": 0, 00:15:08.507 "avg_latency_us": 154.43608324486263, 00:15:08.507 "min_latency_us": 41.192727272727275, 00:15:08.507 "max_latency_us": 1794.7927272727272 00:15:08.507 } 00:15:08.507 ], 00:15:08.507 "core_count": 1 00:15:08.507 } 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.507 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73015 00:15:08.766 killing process with pid 73015 00:15:08.766 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.766 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.766 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73015' 00:15:08.766 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73015 00:15:08.766 [2024-12-05 19:35:01.966032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.766 19:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73015 00:15:09.023 [2024-12-05 19:35:02.284667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KV47eOh8pr 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:10.398 ************************************ 00:15:10.398 END TEST raid_read_error_test 00:15:10.398 ************************************ 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:15:10.398 00:15:10.398 real 0m5.051s 
00:15:10.398 user 0m6.101s 00:15:10.398 sys 0m0.703s 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.398 19:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.398 19:35:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:10.398 19:35:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:10.398 19:35:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.398 19:35:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.398 ************************************ 00:15:10.398 START TEST raid_write_error_test 00:15:10.398 ************************************ 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tCpmhx6AF1 00:15:10.398 19:35:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73166 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73166 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73166 ']' 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.398 19:35:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.398 [2024-12-05 19:35:03.688547] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:10.398 [2024-12-05 19:35:03.689100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73166 ] 00:15:10.657 [2024-12-05 19:35:03.868432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.657 [2024-12-05 19:35:04.021821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.916 [2024-12-05 19:35:04.243728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.916 [2024-12-05 19:35:04.243802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.482 BaseBdev1_malloc 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:11.482 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 true 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 [2024-12-05 19:35:04.776374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:11.483 [2024-12-05 19:35:04.776613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.483 [2024-12-05 19:35:04.776821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:11.483 [2024-12-05 19:35:04.776988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.483 [2024-12-05 19:35:04.780076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.483 BaseBdev1 00:15:11.483 [2024-12-05 19:35:04.780271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 BaseBdev2_malloc 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:11.483 19:35:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 true 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 [2024-12-05 19:35:04.837775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:11.483 [2024-12-05 19:35:04.838042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.483 [2024-12-05 19:35:04.838125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:11.483 [2024-12-05 19:35:04.838393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.483 [2024-12-05 19:35:04.841894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.483 [2024-12-05 19:35:04.842091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:11.483 BaseBdev2 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:11.483 BaseBdev3_malloc 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 true 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.483 [2024-12-05 19:35:04.908391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:11.483 [2024-12-05 19:35:04.908655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.483 [2024-12-05 19:35:04.908714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:11.483 [2024-12-05 19:35:04.908754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.483 [2024-12-05 19:35:04.911774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.483 [2024-12-05 19:35:04.911829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:11.483 BaseBdev3 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.483 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.742 BaseBdev4_malloc 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.742 true 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.742 [2024-12-05 19:35:04.966711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:11.742 [2024-12-05 19:35:04.966972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.742 [2024-12-05 19:35:04.967017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:11.742 [2024-12-05 19:35:04.967041] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.742 [2024-12-05 19:35:04.970244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.742 [2024-12-05 19:35:04.970447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:11.742 BaseBdev4 
00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.742 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.742 [2024-12-05 19:35:04.974873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.742 [2024-12-05 19:35:04.977587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.742 [2024-12-05 19:35:04.977945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.742 [2024-12-05 19:35:04.978228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:11.742 [2024-12-05 19:35:04.978757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:11.743 [2024-12-05 19:35:04.978920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:11.743 [2024-12-05 19:35:04.979374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:11.743 [2024-12-05 19:35:04.979808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:11.743 [2024-12-05 19:35:04.979953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:11.743 [2024-12-05 19:35:04.980296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.743 19:35:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.743 19:35:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.743 19:35:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.743 "name": "raid_bdev1", 00:15:11.743 "uuid": "f70781c7-dc94-4aa5-8c33-082f9063aa79", 00:15:11.743 "strip_size_kb": 64, 00:15:11.743 "state": "online", 00:15:11.743 "raid_level": "concat", 00:15:11.743 "superblock": true, 00:15:11.743 "num_base_bdevs": 4, 00:15:11.743 "num_base_bdevs_discovered": 4, 00:15:11.743 
"num_base_bdevs_operational": 4, 00:15:11.743 "base_bdevs_list": [ 00:15:11.743 { 00:15:11.743 "name": "BaseBdev1", 00:15:11.743 "uuid": "3aed123f-1805-5f23-b8f0-60c6443e248a", 00:15:11.743 "is_configured": true, 00:15:11.743 "data_offset": 2048, 00:15:11.743 "data_size": 63488 00:15:11.743 }, 00:15:11.743 { 00:15:11.743 "name": "BaseBdev2", 00:15:11.743 "uuid": "fdf05c9d-c176-582d-861d-57cc0402cb40", 00:15:11.743 "is_configured": true, 00:15:11.743 "data_offset": 2048, 00:15:11.743 "data_size": 63488 00:15:11.743 }, 00:15:11.743 { 00:15:11.743 "name": "BaseBdev3", 00:15:11.743 "uuid": "a4212d06-0958-5edb-939b-a330e41cf18f", 00:15:11.743 "is_configured": true, 00:15:11.743 "data_offset": 2048, 00:15:11.743 "data_size": 63488 00:15:11.743 }, 00:15:11.743 { 00:15:11.743 "name": "BaseBdev4", 00:15:11.743 "uuid": "0b74bc61-79dd-530a-bc0a-68b8f2b94a60", 00:15:11.743 "is_configured": true, 00:15:11.743 "data_offset": 2048, 00:15:11.743 "data_size": 63488 00:15:11.743 } 00:15:11.743 ] 00:15:11.743 }' 00:15:11.743 19:35:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.743 19:35:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.309 19:35:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:12.309 19:35:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:12.309 [2024-12-05 19:35:05.636752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.245 19:35:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.245 "name": "raid_bdev1", 00:15:13.245 "uuid": "f70781c7-dc94-4aa5-8c33-082f9063aa79", 00:15:13.245 "strip_size_kb": 64, 00:15:13.245 "state": "online", 00:15:13.245 "raid_level": "concat", 00:15:13.245 "superblock": true, 00:15:13.245 "num_base_bdevs": 4, 00:15:13.245 "num_base_bdevs_discovered": 4, 00:15:13.245 "num_base_bdevs_operational": 4, 00:15:13.245 "base_bdevs_list": [ 00:15:13.245 { 00:15:13.245 "name": "BaseBdev1", 00:15:13.245 "uuid": "3aed123f-1805-5f23-b8f0-60c6443e248a", 00:15:13.245 "is_configured": true, 00:15:13.245 "data_offset": 2048, 00:15:13.245 "data_size": 63488 00:15:13.245 }, 00:15:13.245 { 00:15:13.245 "name": "BaseBdev2", 00:15:13.245 "uuid": "fdf05c9d-c176-582d-861d-57cc0402cb40", 00:15:13.245 "is_configured": true, 00:15:13.245 "data_offset": 2048, 00:15:13.245 "data_size": 63488 00:15:13.245 }, 00:15:13.245 { 00:15:13.245 "name": "BaseBdev3", 00:15:13.245 "uuid": "a4212d06-0958-5edb-939b-a330e41cf18f", 00:15:13.245 "is_configured": true, 00:15:13.245 "data_offset": 2048, 00:15:13.245 "data_size": 63488 00:15:13.245 }, 00:15:13.245 { 00:15:13.245 "name": "BaseBdev4", 00:15:13.245 "uuid": "0b74bc61-79dd-530a-bc0a-68b8f2b94a60", 00:15:13.245 "is_configured": true, 00:15:13.245 "data_offset": 2048, 00:15:13.245 "data_size": 63488 00:15:13.245 } 00:15:13.245 ] 00:15:13.245 }' 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.245 19:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.879 [2024-12-05 19:35:07.064946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.879 [2024-12-05 19:35:07.065017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.879 [2024-12-05 19:35:07.070308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.879 [2024-12-05 19:35:07.070509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.879 [2024-12-05 19:35:07.070618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.879 [2024-12-05 19:35:07.070665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:13.879 { 00:15:13.879 "results": [ 00:15:13.879 { 00:15:13.879 "job": "raid_bdev1", 00:15:13.879 "core_mask": "0x1", 00:15:13.879 "workload": "randrw", 00:15:13.879 "percentage": 50, 00:15:13.879 "status": "finished", 00:15:13.879 "queue_depth": 1, 00:15:13.879 "io_size": 131072, 00:15:13.879 "runtime": 1.425818, 00:15:13.879 "iops": 9127.39213560216, 00:15:13.879 "mibps": 1140.92401695027, 00:15:13.879 "io_failed": 1, 00:15:13.879 "io_timeout": 0, 00:15:13.879 "avg_latency_us": 153.13552893514478, 00:15:13.879 "min_latency_us": 40.02909090909091, 00:15:13.879 "max_latency_us": 1854.370909090909 00:15:13.879 } 00:15:13.879 ], 00:15:13.879 "core_count": 1 00:15:13.879 } 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73166 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73166 ']' 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73166 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73166 00:15:13.879 killing process with pid 73166 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73166' 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73166 00:15:13.879 [2024-12-05 19:35:07.109722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.879 19:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73166 00:15:14.137 [2024-12-05 19:35:07.457665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.511 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tCpmhx6AF1 00:15:15.511 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:15.511 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:15.511 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:15:15.511 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:15.512 ************************************ 00:15:15.512 END TEST raid_write_error_test 00:15:15.512 ************************************ 00:15:15.512 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:15.512 19:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:15.512 19:35:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:15:15.512 00:15:15.512 real 0m5.060s 00:15:15.512 user 0m6.179s 00:15:15.512 sys 0m0.675s 00:15:15.512 19:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.512 19:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.512 19:35:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:15.512 19:35:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:15.512 19:35:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:15.512 19:35:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.512 19:35:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.512 ************************************ 00:15:15.512 START TEST raid_state_function_test 00:15:15.512 ************************************ 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:15.512 19:35:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:15.512 Process raid pid: 73310 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73310 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73310' 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73310 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73310 ']' 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.512 19:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.512 [2024-12-05 19:35:08.803828] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:15.512 [2024-12-05 19:35:08.804012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.772 [2024-12-05 19:35:08.991967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.772 [2024-12-05 19:35:09.125690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.031 [2024-12-05 19:35:09.337861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.031 [2024-12-05 19:35:09.337916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.597 [2024-12-05 19:35:09.836640] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:16.597 [2024-12-05 19:35:09.836753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:16.597 [2024-12-05 19:35:09.836773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:16.597 [2024-12-05 19:35:09.836790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:16.597 [2024-12-05 19:35:09.836800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:16.597 [2024-12-05 19:35:09.836815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:16.597 [2024-12-05 19:35:09.836825] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:16.597 [2024-12-05 19:35:09.836838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.597 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.598 19:35:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.598 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.598 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.598 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.598 "name": "Existed_Raid", 00:15:16.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.598 "strip_size_kb": 0, 00:15:16.598 "state": "configuring", 00:15:16.598 "raid_level": "raid1", 00:15:16.598 "superblock": false, 00:15:16.598 "num_base_bdevs": 4, 00:15:16.598 "num_base_bdevs_discovered": 0, 00:15:16.598 "num_base_bdevs_operational": 4, 00:15:16.598 "base_bdevs_list": [ 00:15:16.598 { 00:15:16.598 "name": "BaseBdev1", 00:15:16.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.598 "is_configured": false, 00:15:16.598 "data_offset": 0, 00:15:16.598 "data_size": 0 00:15:16.598 }, 00:15:16.598 { 00:15:16.598 "name": "BaseBdev2", 00:15:16.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.598 "is_configured": false, 00:15:16.598 "data_offset": 0, 00:15:16.598 "data_size": 0 00:15:16.598 }, 00:15:16.598 { 00:15:16.598 "name": "BaseBdev3", 00:15:16.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.598 "is_configured": false, 00:15:16.598 "data_offset": 0, 00:15:16.598 "data_size": 0 00:15:16.598 }, 00:15:16.598 { 00:15:16.598 "name": "BaseBdev4", 00:15:16.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.598 "is_configured": false, 00:15:16.598 "data_offset": 0, 00:15:16.598 "data_size": 0 00:15:16.598 } 00:15:16.598 ] 00:15:16.598 }' 00:15:16.598 19:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.598 19:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 [2024-12-05 19:35:10.324803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.163 [2024-12-05 19:35:10.324867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 [2024-12-05 19:35:10.332760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.163 [2024-12-05 19:35:10.332971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.163 [2024-12-05 19:35:10.333102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.163 [2024-12-05 19:35:10.333255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.163 [2024-12-05 19:35:10.333368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:17.163 [2024-12-05 19:35:10.333492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:17.163 [2024-12-05 19:35:10.333602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:17.163 [2024-12-05 19:35:10.333667] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 [2024-12-05 19:35:10.379017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.163 BaseBdev1 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 [ 00:15:17.163 { 00:15:17.163 "name": "BaseBdev1", 00:15:17.163 "aliases": [ 00:15:17.163 "218346dc-24a7-4c19-b252-77d3ed281ea2" 00:15:17.163 ], 00:15:17.163 "product_name": "Malloc disk", 00:15:17.163 "block_size": 512, 00:15:17.163 "num_blocks": 65536, 00:15:17.163 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:17.163 "assigned_rate_limits": { 00:15:17.163 "rw_ios_per_sec": 0, 00:15:17.163 "rw_mbytes_per_sec": 0, 00:15:17.163 "r_mbytes_per_sec": 0, 00:15:17.163 "w_mbytes_per_sec": 0 00:15:17.163 }, 00:15:17.163 "claimed": true, 00:15:17.163 "claim_type": "exclusive_write", 00:15:17.163 "zoned": false, 00:15:17.163 "supported_io_types": { 00:15:17.163 "read": true, 00:15:17.163 "write": true, 00:15:17.163 "unmap": true, 00:15:17.163 "flush": true, 00:15:17.163 "reset": true, 00:15:17.163 "nvme_admin": false, 00:15:17.163 "nvme_io": false, 00:15:17.163 "nvme_io_md": false, 00:15:17.163 "write_zeroes": true, 00:15:17.163 "zcopy": true, 00:15:17.163 "get_zone_info": false, 00:15:17.163 "zone_management": false, 00:15:17.163 "zone_append": false, 00:15:17.163 "compare": false, 00:15:17.163 "compare_and_write": false, 00:15:17.163 "abort": true, 00:15:17.163 "seek_hole": false, 00:15:17.163 "seek_data": false, 00:15:17.163 "copy": true, 00:15:17.163 "nvme_iov_md": false 00:15:17.163 }, 00:15:17.163 "memory_domains": [ 00:15:17.163 { 00:15:17.163 "dma_device_id": "system", 00:15:17.163 "dma_device_type": 1 00:15:17.163 }, 00:15:17.163 { 00:15:17.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.163 "dma_device_type": 2 00:15:17.163 } 00:15:17.163 ], 00:15:17.163 "driver_specific": {} 00:15:17.163 } 00:15:17.163 ] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.163 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.163 "name": "Existed_Raid", 
00:15:17.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.163 "strip_size_kb": 0, 00:15:17.163 "state": "configuring", 00:15:17.163 "raid_level": "raid1", 00:15:17.163 "superblock": false, 00:15:17.163 "num_base_bdevs": 4, 00:15:17.163 "num_base_bdevs_discovered": 1, 00:15:17.164 "num_base_bdevs_operational": 4, 00:15:17.164 "base_bdevs_list": [ 00:15:17.164 { 00:15:17.164 "name": "BaseBdev1", 00:15:17.164 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:17.164 "is_configured": true, 00:15:17.164 "data_offset": 0, 00:15:17.164 "data_size": 65536 00:15:17.164 }, 00:15:17.164 { 00:15:17.164 "name": "BaseBdev2", 00:15:17.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.164 "is_configured": false, 00:15:17.164 "data_offset": 0, 00:15:17.164 "data_size": 0 00:15:17.164 }, 00:15:17.164 { 00:15:17.164 "name": "BaseBdev3", 00:15:17.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.164 "is_configured": false, 00:15:17.164 "data_offset": 0, 00:15:17.164 "data_size": 0 00:15:17.164 }, 00:15:17.164 { 00:15:17.164 "name": "BaseBdev4", 00:15:17.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.164 "is_configured": false, 00:15:17.164 "data_offset": 0, 00:15:17.164 "data_size": 0 00:15:17.164 } 00:15:17.164 ] 00:15:17.164 }' 00:15:17.164 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.164 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.728 [2024-12-05 19:35:10.927395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.728 [2024-12-05 19:35:10.927480] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.728 [2024-12-05 19:35:10.935372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.728 [2024-12-05 19:35:10.938233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.728 [2024-12-05 19:35:10.938439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.728 [2024-12-05 19:35:10.938569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:17.728 [2024-12-05 19:35:10.938605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:17.728 [2024-12-05 19:35:10.938619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:17.728 [2024-12-05 19:35:10.938634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:17.728 
19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.728 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.729 "name": "Existed_Raid", 00:15:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.729 "strip_size_kb": 0, 00:15:17.729 "state": "configuring", 00:15:17.729 "raid_level": "raid1", 00:15:17.729 "superblock": false, 00:15:17.729 "num_base_bdevs": 4, 00:15:17.729 "num_base_bdevs_discovered": 1, 
00:15:17.729 "num_base_bdevs_operational": 4, 00:15:17.729 "base_bdevs_list": [ 00:15:17.729 { 00:15:17.729 "name": "BaseBdev1", 00:15:17.729 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:17.729 "is_configured": true, 00:15:17.729 "data_offset": 0, 00:15:17.729 "data_size": 65536 00:15:17.729 }, 00:15:17.729 { 00:15:17.729 "name": "BaseBdev2", 00:15:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.729 "is_configured": false, 00:15:17.729 "data_offset": 0, 00:15:17.729 "data_size": 0 00:15:17.729 }, 00:15:17.729 { 00:15:17.729 "name": "BaseBdev3", 00:15:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.729 "is_configured": false, 00:15:17.729 "data_offset": 0, 00:15:17.729 "data_size": 0 00:15:17.729 }, 00:15:17.729 { 00:15:17.729 "name": "BaseBdev4", 00:15:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.729 "is_configured": false, 00:15:17.729 "data_offset": 0, 00:15:17.729 "data_size": 0 00:15:17.729 } 00:15:17.729 ] 00:15:17.729 }' 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.729 19:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 [2024-12-05 19:35:11.507589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.295 BaseBdev2 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 [ 00:15:18.295 { 00:15:18.295 "name": "BaseBdev2", 00:15:18.295 "aliases": [ 00:15:18.295 "ec53913d-ac1d-497d-a8d5-f4e62a6982ca" 00:15:18.295 ], 00:15:18.295 "product_name": "Malloc disk", 00:15:18.295 "block_size": 512, 00:15:18.295 "num_blocks": 65536, 00:15:18.295 "uuid": "ec53913d-ac1d-497d-a8d5-f4e62a6982ca", 00:15:18.295 "assigned_rate_limits": { 00:15:18.295 "rw_ios_per_sec": 0, 00:15:18.295 "rw_mbytes_per_sec": 0, 00:15:18.295 "r_mbytes_per_sec": 0, 00:15:18.295 "w_mbytes_per_sec": 0 00:15:18.295 }, 00:15:18.295 "claimed": true, 00:15:18.295 "claim_type": "exclusive_write", 00:15:18.295 "zoned": false, 00:15:18.295 "supported_io_types": { 00:15:18.295 "read": true, 
00:15:18.295 "write": true, 00:15:18.295 "unmap": true, 00:15:18.295 "flush": true, 00:15:18.295 "reset": true, 00:15:18.295 "nvme_admin": false, 00:15:18.295 "nvme_io": false, 00:15:18.295 "nvme_io_md": false, 00:15:18.295 "write_zeroes": true, 00:15:18.295 "zcopy": true, 00:15:18.295 "get_zone_info": false, 00:15:18.295 "zone_management": false, 00:15:18.295 "zone_append": false, 00:15:18.295 "compare": false, 00:15:18.295 "compare_and_write": false, 00:15:18.295 "abort": true, 00:15:18.295 "seek_hole": false, 00:15:18.295 "seek_data": false, 00:15:18.295 "copy": true, 00:15:18.295 "nvme_iov_md": false 00:15:18.295 }, 00:15:18.295 "memory_domains": [ 00:15:18.295 { 00:15:18.295 "dma_device_id": "system", 00:15:18.295 "dma_device_type": 1 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.295 "dma_device_type": 2 00:15:18.295 } 00:15:18.295 ], 00:15:18.295 "driver_specific": {} 00:15:18.295 } 00:15:18.295 ] 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.295 "name": "Existed_Raid", 00:15:18.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.295 "strip_size_kb": 0, 00:15:18.295 "state": "configuring", 00:15:18.295 "raid_level": "raid1", 00:15:18.295 "superblock": false, 00:15:18.295 "num_base_bdevs": 4, 00:15:18.295 "num_base_bdevs_discovered": 2, 00:15:18.295 "num_base_bdevs_operational": 4, 00:15:18.295 "base_bdevs_list": [ 00:15:18.295 { 00:15:18.295 "name": "BaseBdev1", 00:15:18.295 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:18.295 "is_configured": true, 00:15:18.295 "data_offset": 0, 00:15:18.295 "data_size": 65536 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "name": "BaseBdev2", 00:15:18.295 "uuid": "ec53913d-ac1d-497d-a8d5-f4e62a6982ca", 00:15:18.295 "is_configured": true, 
00:15:18.295 "data_offset": 0, 00:15:18.295 "data_size": 65536 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "name": "BaseBdev3", 00:15:18.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.295 "is_configured": false, 00:15:18.295 "data_offset": 0, 00:15:18.295 "data_size": 0 00:15:18.295 }, 00:15:18.295 { 00:15:18.295 "name": "BaseBdev4", 00:15:18.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.295 "is_configured": false, 00:15:18.295 "data_offset": 0, 00:15:18.295 "data_size": 0 00:15:18.295 } 00:15:18.295 ] 00:15:18.295 }' 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.295 19:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.861 [2024-12-05 19:35:12.091022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.861 BaseBdev3 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.861 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.861 [ 00:15:18.861 { 00:15:18.861 "name": "BaseBdev3", 00:15:18.861 "aliases": [ 00:15:18.861 "2331a5d2-8bee-4b55-96c7-a2c954eda3d1" 00:15:18.861 ], 00:15:18.861 "product_name": "Malloc disk", 00:15:18.861 "block_size": 512, 00:15:18.861 "num_blocks": 65536, 00:15:18.861 "uuid": "2331a5d2-8bee-4b55-96c7-a2c954eda3d1", 00:15:18.861 "assigned_rate_limits": { 00:15:18.861 "rw_ios_per_sec": 0, 00:15:18.861 "rw_mbytes_per_sec": 0, 00:15:18.861 "r_mbytes_per_sec": 0, 00:15:18.861 "w_mbytes_per_sec": 0 00:15:18.861 }, 00:15:18.861 "claimed": true, 00:15:18.861 "claim_type": "exclusive_write", 00:15:18.861 "zoned": false, 00:15:18.861 "supported_io_types": { 00:15:18.861 "read": true, 00:15:18.861 "write": true, 00:15:18.861 "unmap": true, 00:15:18.861 "flush": true, 00:15:18.861 "reset": true, 00:15:18.861 "nvme_admin": false, 00:15:18.861 "nvme_io": false, 00:15:18.862 "nvme_io_md": false, 00:15:18.862 "write_zeroes": true, 00:15:18.862 "zcopy": true, 00:15:18.862 "get_zone_info": false, 00:15:18.862 "zone_management": false, 00:15:18.862 "zone_append": false, 00:15:18.862 "compare": false, 00:15:18.862 "compare_and_write": false, 
00:15:18.862 "abort": true, 00:15:18.862 "seek_hole": false, 00:15:18.862 "seek_data": false, 00:15:18.862 "copy": true, 00:15:18.862 "nvme_iov_md": false 00:15:18.862 }, 00:15:18.862 "memory_domains": [ 00:15:18.862 { 00:15:18.862 "dma_device_id": "system", 00:15:18.862 "dma_device_type": 1 00:15:18.862 }, 00:15:18.862 { 00:15:18.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.862 "dma_device_type": 2 00:15:18.862 } 00:15:18.862 ], 00:15:18.862 "driver_specific": {} 00:15:18.862 } 00:15:18.862 ] 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.862 "name": "Existed_Raid", 00:15:18.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.862 "strip_size_kb": 0, 00:15:18.862 "state": "configuring", 00:15:18.862 "raid_level": "raid1", 00:15:18.862 "superblock": false, 00:15:18.862 "num_base_bdevs": 4, 00:15:18.862 "num_base_bdevs_discovered": 3, 00:15:18.862 "num_base_bdevs_operational": 4, 00:15:18.862 "base_bdevs_list": [ 00:15:18.862 { 00:15:18.862 "name": "BaseBdev1", 00:15:18.862 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:18.862 "is_configured": true, 00:15:18.862 "data_offset": 0, 00:15:18.862 "data_size": 65536 00:15:18.862 }, 00:15:18.862 { 00:15:18.862 "name": "BaseBdev2", 00:15:18.862 "uuid": "ec53913d-ac1d-497d-a8d5-f4e62a6982ca", 00:15:18.862 "is_configured": true, 00:15:18.862 "data_offset": 0, 00:15:18.862 "data_size": 65536 00:15:18.862 }, 00:15:18.862 { 00:15:18.862 "name": "BaseBdev3", 00:15:18.862 "uuid": "2331a5d2-8bee-4b55-96c7-a2c954eda3d1", 00:15:18.862 "is_configured": true, 00:15:18.862 "data_offset": 0, 00:15:18.862 "data_size": 65536 00:15:18.862 }, 00:15:18.862 { 00:15:18.862 "name": "BaseBdev4", 00:15:18.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.862 "is_configured": false, 
00:15:18.862 "data_offset": 0, 00:15:18.862 "data_size": 0 00:15:18.862 } 00:15:18.862 ] 00:15:18.862 }' 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.862 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.429 [2024-12-05 19:35:12.669934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.429 [2024-12-05 19:35:12.670005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:19.429 [2024-12-05 19:35:12.670018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:19.429 [2024-12-05 19:35:12.670361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:19.429 [2024-12-05 19:35:12.670593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:19.429 [2024-12-05 19:35:12.670616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:19.429 [2024-12-05 19:35:12.670986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.429 BaseBdev4 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.429 [ 00:15:19.429 { 00:15:19.429 "name": "BaseBdev4", 00:15:19.429 "aliases": [ 00:15:19.429 "c4978566-b081-4e14-8287-2e048b481053" 00:15:19.429 ], 00:15:19.429 "product_name": "Malloc disk", 00:15:19.429 "block_size": 512, 00:15:19.429 "num_blocks": 65536, 00:15:19.429 "uuid": "c4978566-b081-4e14-8287-2e048b481053", 00:15:19.429 "assigned_rate_limits": { 00:15:19.429 "rw_ios_per_sec": 0, 00:15:19.429 "rw_mbytes_per_sec": 0, 00:15:19.429 "r_mbytes_per_sec": 0, 00:15:19.429 "w_mbytes_per_sec": 0 00:15:19.429 }, 00:15:19.429 "claimed": true, 00:15:19.429 "claim_type": "exclusive_write", 00:15:19.429 "zoned": false, 00:15:19.429 "supported_io_types": { 00:15:19.429 "read": true, 00:15:19.429 "write": true, 00:15:19.429 "unmap": true, 00:15:19.429 "flush": true, 00:15:19.429 "reset": true, 00:15:19.429 
"nvme_admin": false, 00:15:19.429 "nvme_io": false, 00:15:19.429 "nvme_io_md": false, 00:15:19.429 "write_zeroes": true, 00:15:19.429 "zcopy": true, 00:15:19.429 "get_zone_info": false, 00:15:19.429 "zone_management": false, 00:15:19.429 "zone_append": false, 00:15:19.429 "compare": false, 00:15:19.429 "compare_and_write": false, 00:15:19.429 "abort": true, 00:15:19.429 "seek_hole": false, 00:15:19.429 "seek_data": false, 00:15:19.429 "copy": true, 00:15:19.429 "nvme_iov_md": false 00:15:19.429 }, 00:15:19.429 "memory_domains": [ 00:15:19.429 { 00:15:19.429 "dma_device_id": "system", 00:15:19.429 "dma_device_type": 1 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.429 "dma_device_type": 2 00:15:19.429 } 00:15:19.429 ], 00:15:19.429 "driver_specific": {} 00:15:19.429 } 00:15:19.429 ] 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.429 19:35:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.429 "name": "Existed_Raid", 00:15:19.429 "uuid": "e6422d04-8f87-46b4-b6af-a3f6184e57ca", 00:15:19.429 "strip_size_kb": 0, 00:15:19.429 "state": "online", 00:15:19.429 "raid_level": "raid1", 00:15:19.429 "superblock": false, 00:15:19.429 "num_base_bdevs": 4, 00:15:19.429 "num_base_bdevs_discovered": 4, 00:15:19.429 "num_base_bdevs_operational": 4, 00:15:19.429 "base_bdevs_list": [ 00:15:19.429 { 00:15:19.429 "name": "BaseBdev1", 00:15:19.429 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:19.429 "is_configured": true, 00:15:19.429 "data_offset": 0, 00:15:19.429 "data_size": 65536 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "name": "BaseBdev2", 00:15:19.429 "uuid": "ec53913d-ac1d-497d-a8d5-f4e62a6982ca", 00:15:19.429 "is_configured": true, 00:15:19.429 "data_offset": 0, 00:15:19.429 "data_size": 65536 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "name": "BaseBdev3", 00:15:19.429 "uuid": 
"2331a5d2-8bee-4b55-96c7-a2c954eda3d1", 00:15:19.429 "is_configured": true, 00:15:19.429 "data_offset": 0, 00:15:19.429 "data_size": 65536 00:15:19.429 }, 00:15:19.429 { 00:15:19.429 "name": "BaseBdev4", 00:15:19.429 "uuid": "c4978566-b081-4e14-8287-2e048b481053", 00:15:19.429 "is_configured": true, 00:15:19.429 "data_offset": 0, 00:15:19.429 "data_size": 65536 00:15:19.429 } 00:15:19.429 ] 00:15:19.429 }' 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.429 19:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.998 [2024-12-05 19:35:13.254544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.998 19:35:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.998 "name": "Existed_Raid", 00:15:19.998 "aliases": [ 00:15:19.998 "e6422d04-8f87-46b4-b6af-a3f6184e57ca" 00:15:19.998 ], 00:15:19.998 "product_name": "Raid Volume", 00:15:19.998 "block_size": 512, 00:15:19.998 "num_blocks": 65536, 00:15:19.998 "uuid": "e6422d04-8f87-46b4-b6af-a3f6184e57ca", 00:15:19.998 "assigned_rate_limits": { 00:15:19.998 "rw_ios_per_sec": 0, 00:15:19.998 "rw_mbytes_per_sec": 0, 00:15:19.998 "r_mbytes_per_sec": 0, 00:15:19.998 "w_mbytes_per_sec": 0 00:15:19.998 }, 00:15:19.998 "claimed": false, 00:15:19.998 "zoned": false, 00:15:19.998 "supported_io_types": { 00:15:19.998 "read": true, 00:15:19.998 "write": true, 00:15:19.998 "unmap": false, 00:15:19.998 "flush": false, 00:15:19.998 "reset": true, 00:15:19.998 "nvme_admin": false, 00:15:19.998 "nvme_io": false, 00:15:19.998 "nvme_io_md": false, 00:15:19.998 "write_zeroes": true, 00:15:19.998 "zcopy": false, 00:15:19.998 "get_zone_info": false, 00:15:19.998 "zone_management": false, 00:15:19.998 "zone_append": false, 00:15:19.998 "compare": false, 00:15:19.998 "compare_and_write": false, 00:15:19.998 "abort": false, 00:15:19.998 "seek_hole": false, 00:15:19.998 "seek_data": false, 00:15:19.998 "copy": false, 00:15:19.998 "nvme_iov_md": false 00:15:19.998 }, 00:15:19.998 "memory_domains": [ 00:15:19.998 { 00:15:19.998 "dma_device_id": "system", 00:15:19.998 "dma_device_type": 1 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.998 "dma_device_type": 2 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "system", 00:15:19.998 "dma_device_type": 1 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.998 "dma_device_type": 2 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "system", 00:15:19.998 "dma_device_type": 1 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:19.998 "dma_device_type": 2 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "system", 00:15:19.998 "dma_device_type": 1 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.998 "dma_device_type": 2 00:15:19.998 } 00:15:19.998 ], 00:15:19.998 "driver_specific": { 00:15:19.998 "raid": { 00:15:19.998 "uuid": "e6422d04-8f87-46b4-b6af-a3f6184e57ca", 00:15:19.998 "strip_size_kb": 0, 00:15:19.998 "state": "online", 00:15:19.998 "raid_level": "raid1", 00:15:19.998 "superblock": false, 00:15:19.998 "num_base_bdevs": 4, 00:15:19.998 "num_base_bdevs_discovered": 4, 00:15:19.998 "num_base_bdevs_operational": 4, 00:15:19.998 "base_bdevs_list": [ 00:15:19.998 { 00:15:19.998 "name": "BaseBdev1", 00:15:19.998 "uuid": "218346dc-24a7-4c19-b252-77d3ed281ea2", 00:15:19.998 "is_configured": true, 00:15:19.998 "data_offset": 0, 00:15:19.998 "data_size": 65536 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "name": "BaseBdev2", 00:15:19.998 "uuid": "ec53913d-ac1d-497d-a8d5-f4e62a6982ca", 00:15:19.998 "is_configured": true, 00:15:19.998 "data_offset": 0, 00:15:19.998 "data_size": 65536 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "name": "BaseBdev3", 00:15:19.998 "uuid": "2331a5d2-8bee-4b55-96c7-a2c954eda3d1", 00:15:19.998 "is_configured": true, 00:15:19.998 "data_offset": 0, 00:15:19.998 "data_size": 65536 00:15:19.998 }, 00:15:19.998 { 00:15:19.998 "name": "BaseBdev4", 00:15:19.998 "uuid": "c4978566-b081-4e14-8287-2e048b481053", 00:15:19.998 "is_configured": true, 00:15:19.998 "data_offset": 0, 00:15:19.998 "data_size": 65536 00:15:19.998 } 00:15:19.998 ] 00:15:19.998 } 00:15:19.998 } 00:15:19.998 }' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:19.998 BaseBdev2 00:15:19.998 BaseBdev3 
00:15:19.998 BaseBdev4' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.998 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.999 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.257 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.257 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.257 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.257 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 19:35:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.258 19:35:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 [2024-12-05 19:35:13.614340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.517 
19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.517 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.518 "name": "Existed_Raid", 00:15:20.518 "uuid": "e6422d04-8f87-46b4-b6af-a3f6184e57ca", 00:15:20.518 "strip_size_kb": 0, 00:15:20.518 "state": "online", 00:15:20.518 "raid_level": "raid1", 00:15:20.518 "superblock": false, 00:15:20.518 "num_base_bdevs": 4, 00:15:20.518 "num_base_bdevs_discovered": 3, 00:15:20.518 "num_base_bdevs_operational": 3, 00:15:20.518 "base_bdevs_list": [ 00:15:20.518 { 00:15:20.518 "name": null, 00:15:20.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.518 "is_configured": false, 00:15:20.518 "data_offset": 0, 00:15:20.518 "data_size": 65536 00:15:20.518 }, 00:15:20.518 { 00:15:20.518 "name": "BaseBdev2", 00:15:20.518 "uuid": "ec53913d-ac1d-497d-a8d5-f4e62a6982ca", 00:15:20.518 "is_configured": true, 00:15:20.518 "data_offset": 0, 00:15:20.518 "data_size": 65536 00:15:20.518 }, 00:15:20.518 { 00:15:20.518 "name": "BaseBdev3", 00:15:20.518 "uuid": "2331a5d2-8bee-4b55-96c7-a2c954eda3d1", 00:15:20.518 "is_configured": true, 00:15:20.518 "data_offset": 0, 
00:15:20.518 "data_size": 65536 00:15:20.518 }, 00:15:20.518 { 00:15:20.518 "name": "BaseBdev4", 00:15:20.518 "uuid": "c4978566-b081-4e14-8287-2e048b481053", 00:15:20.518 "is_configured": true, 00:15:20.518 "data_offset": 0, 00:15:20.518 "data_size": 65536 00:15:20.518 } 00:15:20.518 ] 00:15:20.518 }' 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.518 19:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 [2024-12-05 19:35:14.290174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 [2024-12-05 19:35:14.431323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:21.085 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.344 [2024-12-05 19:35:14.579362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:21.344 [2024-12-05 19:35:14.579629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.344 [2024-12-05 19:35:14.665252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.344 [2024-12-05 19:35:14.665503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.344 [2024-12-05 19:35:14.665538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.344 BaseBdev2 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:21.344 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.345 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.605 [ 00:15:21.605 { 00:15:21.605 "name": "BaseBdev2", 00:15:21.605 "aliases": [ 00:15:21.605 "1fb4f411-bd02-4396-a105-0a4f2140c0bf" 00:15:21.605 ], 00:15:21.605 "product_name": "Malloc disk", 00:15:21.605 "block_size": 512, 00:15:21.605 "num_blocks": 65536, 00:15:21.605 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:21.605 "assigned_rate_limits": { 00:15:21.605 "rw_ios_per_sec": 0, 00:15:21.605 "rw_mbytes_per_sec": 0, 00:15:21.605 "r_mbytes_per_sec": 0, 00:15:21.605 "w_mbytes_per_sec": 0 00:15:21.605 }, 00:15:21.605 "claimed": false, 00:15:21.605 "zoned": false, 00:15:21.605 "supported_io_types": { 00:15:21.605 "read": true, 00:15:21.605 "write": true, 00:15:21.605 "unmap": true, 00:15:21.605 "flush": true, 00:15:21.605 "reset": true, 00:15:21.605 "nvme_admin": false, 00:15:21.605 "nvme_io": false, 00:15:21.605 "nvme_io_md": false, 00:15:21.605 "write_zeroes": true, 00:15:21.605 "zcopy": true, 00:15:21.605 "get_zone_info": false, 00:15:21.605 "zone_management": false, 00:15:21.605 "zone_append": false, 
00:15:21.605 "compare": false, 00:15:21.605 "compare_and_write": false, 00:15:21.605 "abort": true, 00:15:21.605 "seek_hole": false, 00:15:21.605 "seek_data": false, 00:15:21.605 "copy": true, 00:15:21.605 "nvme_iov_md": false 00:15:21.605 }, 00:15:21.605 "memory_domains": [ 00:15:21.605 { 00:15:21.605 "dma_device_id": "system", 00:15:21.605 "dma_device_type": 1 00:15:21.605 }, 00:15:21.605 { 00:15:21.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.605 "dma_device_type": 2 00:15:21.605 } 00:15:21.605 ], 00:15:21.605 "driver_specific": {} 00:15:21.605 } 00:15:21.605 ] 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.605 BaseBdev3 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.605 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.605 [ 00:15:21.605 { 00:15:21.605 "name": "BaseBdev3", 00:15:21.605 "aliases": [ 00:15:21.605 "c77c085e-5351-41b8-b680-3a037760e70b" 00:15:21.605 ], 00:15:21.605 "product_name": "Malloc disk", 00:15:21.605 "block_size": 512, 00:15:21.605 "num_blocks": 65536, 00:15:21.605 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:21.605 "assigned_rate_limits": { 00:15:21.605 "rw_ios_per_sec": 0, 00:15:21.605 "rw_mbytes_per_sec": 0, 00:15:21.605 "r_mbytes_per_sec": 0, 00:15:21.605 "w_mbytes_per_sec": 0 00:15:21.605 }, 00:15:21.605 "claimed": false, 00:15:21.605 "zoned": false, 00:15:21.605 "supported_io_types": { 00:15:21.605 "read": true, 00:15:21.605 "write": true, 00:15:21.605 "unmap": true, 00:15:21.605 "flush": true, 00:15:21.605 "reset": true, 00:15:21.605 "nvme_admin": false, 00:15:21.605 "nvme_io": false, 00:15:21.605 "nvme_io_md": false, 00:15:21.605 "write_zeroes": true, 00:15:21.605 "zcopy": true, 00:15:21.605 "get_zone_info": false, 00:15:21.605 "zone_management": false, 00:15:21.605 "zone_append": false, 
00:15:21.605 "compare": false, 00:15:21.605 "compare_and_write": false, 00:15:21.605 "abort": true, 00:15:21.605 "seek_hole": false, 00:15:21.605 "seek_data": false, 00:15:21.605 "copy": true, 00:15:21.605 "nvme_iov_md": false 00:15:21.605 }, 00:15:21.605 "memory_domains": [ 00:15:21.605 { 00:15:21.605 "dma_device_id": "system", 00:15:21.605 "dma_device_type": 1 00:15:21.605 }, 00:15:21.606 { 00:15:21.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.606 "dma_device_type": 2 00:15:21.606 } 00:15:21.606 ], 00:15:21.606 "driver_specific": {} 00:15:21.606 } 00:15:21.606 ] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.606 BaseBdev4 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.606 [ 00:15:21.606 { 00:15:21.606 "name": "BaseBdev4", 00:15:21.606 "aliases": [ 00:15:21.606 "b69f3f56-2650-40f0-aaf3-299f0a69cdc7" 00:15:21.606 ], 00:15:21.606 "product_name": "Malloc disk", 00:15:21.606 "block_size": 512, 00:15:21.606 "num_blocks": 65536, 00:15:21.606 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:21.606 "assigned_rate_limits": { 00:15:21.606 "rw_ios_per_sec": 0, 00:15:21.606 "rw_mbytes_per_sec": 0, 00:15:21.606 "r_mbytes_per_sec": 0, 00:15:21.606 "w_mbytes_per_sec": 0 00:15:21.606 }, 00:15:21.606 "claimed": false, 00:15:21.606 "zoned": false, 00:15:21.606 "supported_io_types": { 00:15:21.606 "read": true, 00:15:21.606 "write": true, 00:15:21.606 "unmap": true, 00:15:21.606 "flush": true, 00:15:21.606 "reset": true, 00:15:21.606 "nvme_admin": false, 00:15:21.606 "nvme_io": false, 00:15:21.606 "nvme_io_md": false, 00:15:21.606 "write_zeroes": true, 00:15:21.606 "zcopy": true, 00:15:21.606 "get_zone_info": false, 00:15:21.606 "zone_management": false, 00:15:21.606 "zone_append": false, 
00:15:21.606 "compare": false, 00:15:21.606 "compare_and_write": false, 00:15:21.606 "abort": true, 00:15:21.606 "seek_hole": false, 00:15:21.606 "seek_data": false, 00:15:21.606 "copy": true, 00:15:21.606 "nvme_iov_md": false 00:15:21.606 }, 00:15:21.606 "memory_domains": [ 00:15:21.606 { 00:15:21.606 "dma_device_id": "system", 00:15:21.606 "dma_device_type": 1 00:15:21.606 }, 00:15:21.606 { 00:15:21.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.606 "dma_device_type": 2 00:15:21.606 } 00:15:21.606 ], 00:15:21.606 "driver_specific": {} 00:15:21.606 } 00:15:21.606 ] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.606 [2024-12-05 19:35:14.950226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.606 [2024-12-05 19:35:14.950444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.606 [2024-12-05 19:35:14.950594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.606 [2024-12-05 19:35:14.953251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.606 [2024-12-05 19:35:14.953463] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.606 19:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.606 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:21.606 "name": "Existed_Raid", 00:15:21.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.606 "strip_size_kb": 0, 00:15:21.606 "state": "configuring", 00:15:21.606 "raid_level": "raid1", 00:15:21.606 "superblock": false, 00:15:21.606 "num_base_bdevs": 4, 00:15:21.606 "num_base_bdevs_discovered": 3, 00:15:21.606 "num_base_bdevs_operational": 4, 00:15:21.606 "base_bdevs_list": [ 00:15:21.606 { 00:15:21.606 "name": "BaseBdev1", 00:15:21.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.606 "is_configured": false, 00:15:21.606 "data_offset": 0, 00:15:21.606 "data_size": 0 00:15:21.606 }, 00:15:21.606 { 00:15:21.606 "name": "BaseBdev2", 00:15:21.606 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:21.606 "is_configured": true, 00:15:21.606 "data_offset": 0, 00:15:21.606 "data_size": 65536 00:15:21.606 }, 00:15:21.606 { 00:15:21.606 "name": "BaseBdev3", 00:15:21.606 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:21.606 "is_configured": true, 00:15:21.606 "data_offset": 0, 00:15:21.606 "data_size": 65536 00:15:21.606 }, 00:15:21.606 { 00:15:21.606 "name": "BaseBdev4", 00:15:21.606 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:21.607 "is_configured": true, 00:15:21.607 "data_offset": 0, 00:15:21.607 "data_size": 65536 00:15:21.607 } 00:15:21.607 ] 00:15:21.607 }' 00:15:21.607 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.607 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.173 [2024-12-05 19:35:15.494467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.173 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.173 "name": "Existed_Raid", 00:15:22.173 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:22.173 "strip_size_kb": 0, 00:15:22.173 "state": "configuring", 00:15:22.173 "raid_level": "raid1", 00:15:22.173 "superblock": false, 00:15:22.173 "num_base_bdevs": 4, 00:15:22.173 "num_base_bdevs_discovered": 2, 00:15:22.173 "num_base_bdevs_operational": 4, 00:15:22.173 "base_bdevs_list": [ 00:15:22.173 { 00:15:22.173 "name": "BaseBdev1", 00:15:22.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.173 "is_configured": false, 00:15:22.173 "data_offset": 0, 00:15:22.173 "data_size": 0 00:15:22.173 }, 00:15:22.173 { 00:15:22.173 "name": null, 00:15:22.173 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:22.173 "is_configured": false, 00:15:22.173 "data_offset": 0, 00:15:22.173 "data_size": 65536 00:15:22.173 }, 00:15:22.173 { 00:15:22.173 "name": "BaseBdev3", 00:15:22.173 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:22.173 "is_configured": true, 00:15:22.173 "data_offset": 0, 00:15:22.173 "data_size": 65536 00:15:22.173 }, 00:15:22.173 { 00:15:22.173 "name": "BaseBdev4", 00:15:22.173 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:22.174 "is_configured": true, 00:15:22.174 "data_offset": 0, 00:15:22.174 "data_size": 65536 00:15:22.174 } 00:15:22.174 ] 00:15:22.174 }' 00:15:22.174 19:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.174 19:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.741 [2024-12-05 19:35:16.117154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.741 BaseBdev1 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.741 [ 00:15:22.741 { 00:15:22.741 "name": "BaseBdev1", 00:15:22.741 "aliases": [ 00:15:22.741 "82339b29-9a1e-4bcf-addb-e03b48698740" 00:15:22.741 ], 00:15:22.741 "product_name": "Malloc disk", 00:15:22.741 "block_size": 512, 00:15:22.741 "num_blocks": 65536, 00:15:22.741 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:22.741 "assigned_rate_limits": { 00:15:22.741 "rw_ios_per_sec": 0, 00:15:22.741 "rw_mbytes_per_sec": 0, 00:15:22.741 "r_mbytes_per_sec": 0, 00:15:22.741 "w_mbytes_per_sec": 0 00:15:22.741 }, 00:15:22.741 "claimed": true, 00:15:22.741 "claim_type": "exclusive_write", 00:15:22.741 "zoned": false, 00:15:22.741 "supported_io_types": { 00:15:22.741 "read": true, 00:15:22.741 "write": true, 00:15:22.741 "unmap": true, 00:15:22.741 "flush": true, 00:15:22.741 "reset": true, 00:15:22.741 "nvme_admin": false, 00:15:22.741 "nvme_io": false, 00:15:22.741 "nvme_io_md": false, 00:15:22.741 "write_zeroes": true, 00:15:22.741 "zcopy": true, 00:15:22.741 "get_zone_info": false, 00:15:22.741 "zone_management": false, 00:15:22.741 "zone_append": false, 00:15:22.741 "compare": false, 00:15:22.741 "compare_and_write": false, 00:15:22.741 "abort": true, 00:15:22.741 "seek_hole": false, 00:15:22.741 "seek_data": false, 00:15:22.741 "copy": true, 00:15:22.741 "nvme_iov_md": false 00:15:22.741 }, 00:15:22.741 "memory_domains": [ 00:15:22.741 { 00:15:22.741 "dma_device_id": "system", 00:15:22.741 "dma_device_type": 1 00:15:22.741 }, 00:15:22.741 { 00:15:22.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.741 "dma_device_type": 2 00:15:22.741 } 00:15:22.741 ], 00:15:22.741 "driver_specific": {} 00:15:22.741 } 00:15:22.741 ] 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.741 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.000 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.000 "name": "Existed_Raid", 00:15:23.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:23.000 "strip_size_kb": 0, 00:15:23.000 "state": "configuring", 00:15:23.000 "raid_level": "raid1", 00:15:23.000 "superblock": false, 00:15:23.000 "num_base_bdevs": 4, 00:15:23.000 "num_base_bdevs_discovered": 3, 00:15:23.000 "num_base_bdevs_operational": 4, 00:15:23.000 "base_bdevs_list": [ 00:15:23.000 { 00:15:23.000 "name": "BaseBdev1", 00:15:23.000 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:23.000 "is_configured": true, 00:15:23.000 "data_offset": 0, 00:15:23.000 "data_size": 65536 00:15:23.000 }, 00:15:23.000 { 00:15:23.000 "name": null, 00:15:23.000 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:23.000 "is_configured": false, 00:15:23.000 "data_offset": 0, 00:15:23.000 "data_size": 65536 00:15:23.000 }, 00:15:23.000 { 00:15:23.000 "name": "BaseBdev3", 00:15:23.000 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:23.000 "is_configured": true, 00:15:23.000 "data_offset": 0, 00:15:23.000 "data_size": 65536 00:15:23.000 }, 00:15:23.000 { 00:15:23.000 "name": "BaseBdev4", 00:15:23.000 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:23.000 "is_configured": true, 00:15:23.000 "data_offset": 0, 00:15:23.000 "data_size": 65536 00:15:23.000 } 00:15:23.000 ] 00:15:23.000 }' 00:15:23.000 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.000 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.259 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.259 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:23.259 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.259 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.259 19:35:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.518 [2024-12-05 19:35:16.709427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.518 "name": "Existed_Raid", 00:15:23.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.518 "strip_size_kb": 0, 00:15:23.518 "state": "configuring", 00:15:23.518 "raid_level": "raid1", 00:15:23.518 "superblock": false, 00:15:23.518 "num_base_bdevs": 4, 00:15:23.518 "num_base_bdevs_discovered": 2, 00:15:23.518 "num_base_bdevs_operational": 4, 00:15:23.518 "base_bdevs_list": [ 00:15:23.518 { 00:15:23.518 "name": "BaseBdev1", 00:15:23.518 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:23.518 "is_configured": true, 00:15:23.518 "data_offset": 0, 00:15:23.518 "data_size": 65536 00:15:23.518 }, 00:15:23.518 { 00:15:23.518 "name": null, 00:15:23.518 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:23.518 "is_configured": false, 00:15:23.518 "data_offset": 0, 00:15:23.518 "data_size": 65536 00:15:23.518 }, 00:15:23.518 { 00:15:23.518 "name": null, 00:15:23.518 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:23.518 "is_configured": false, 00:15:23.518 "data_offset": 0, 00:15:23.518 "data_size": 65536 00:15:23.518 }, 00:15:23.518 { 00:15:23.518 "name": "BaseBdev4", 00:15:23.518 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:23.518 "is_configured": true, 00:15:23.518 "data_offset": 0, 00:15:23.518 "data_size": 65536 00:15:23.518 } 00:15:23.518 ] 00:15:23.518 }' 00:15:23.518 19:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.518 19:35:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:24.086 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.087 [2024-12-05 19:35:17.297597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.087 19:35:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.087 "name": "Existed_Raid", 00:15:24.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.087 "strip_size_kb": 0, 00:15:24.087 "state": "configuring", 00:15:24.087 "raid_level": "raid1", 00:15:24.087 "superblock": false, 00:15:24.087 "num_base_bdevs": 4, 00:15:24.087 "num_base_bdevs_discovered": 3, 00:15:24.087 "num_base_bdevs_operational": 4, 00:15:24.087 "base_bdevs_list": [ 00:15:24.087 { 00:15:24.087 "name": "BaseBdev1", 00:15:24.087 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:24.087 "is_configured": true, 00:15:24.087 "data_offset": 0, 00:15:24.087 "data_size": 65536 00:15:24.087 }, 00:15:24.087 { 00:15:24.087 "name": null, 00:15:24.087 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:24.087 "is_configured": false, 00:15:24.087 "data_offset": 
0, 00:15:24.087 "data_size": 65536 00:15:24.087 }, 00:15:24.087 { 00:15:24.087 "name": "BaseBdev3", 00:15:24.087 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:24.087 "is_configured": true, 00:15:24.087 "data_offset": 0, 00:15:24.087 "data_size": 65536 00:15:24.087 }, 00:15:24.087 { 00:15:24.087 "name": "BaseBdev4", 00:15:24.087 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:24.087 "is_configured": true, 00:15:24.087 "data_offset": 0, 00:15:24.087 "data_size": 65536 00:15:24.087 } 00:15:24.087 ] 00:15:24.087 }' 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.087 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.655 [2024-12-05 19:35:17.897891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.655 19:35:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.655 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.656 19:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.656 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.656 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.656 "name": "Existed_Raid", 00:15:24.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.656 "strip_size_kb": 0, 00:15:24.656 "state": "configuring", 00:15:24.656 
"raid_level": "raid1", 00:15:24.656 "superblock": false, 00:15:24.656 "num_base_bdevs": 4, 00:15:24.656 "num_base_bdevs_discovered": 2, 00:15:24.656 "num_base_bdevs_operational": 4, 00:15:24.656 "base_bdevs_list": [ 00:15:24.656 { 00:15:24.656 "name": null, 00:15:24.656 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:24.656 "is_configured": false, 00:15:24.656 "data_offset": 0, 00:15:24.656 "data_size": 65536 00:15:24.656 }, 00:15:24.656 { 00:15:24.656 "name": null, 00:15:24.656 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:24.656 "is_configured": false, 00:15:24.656 "data_offset": 0, 00:15:24.656 "data_size": 65536 00:15:24.656 }, 00:15:24.656 { 00:15:24.656 "name": "BaseBdev3", 00:15:24.656 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:24.656 "is_configured": true, 00:15:24.656 "data_offset": 0, 00:15:24.656 "data_size": 65536 00:15:24.656 }, 00:15:24.656 { 00:15:24.656 "name": "BaseBdev4", 00:15:24.656 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:24.656 "is_configured": true, 00:15:24.656 "data_offset": 0, 00:15:24.656 "data_size": 65536 00:15:24.656 } 00:15:24.656 ] 00:15:24.656 }' 00:15:24.656 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.656 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.226 [2024-12-05 19:35:18.543791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.226 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.226 "name": "Existed_Raid", 00:15:25.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.226 "strip_size_kb": 0, 00:15:25.226 "state": "configuring", 00:15:25.226 "raid_level": "raid1", 00:15:25.226 "superblock": false, 00:15:25.226 "num_base_bdevs": 4, 00:15:25.226 "num_base_bdevs_discovered": 3, 00:15:25.226 "num_base_bdevs_operational": 4, 00:15:25.226 "base_bdevs_list": [ 00:15:25.226 { 00:15:25.226 "name": null, 00:15:25.226 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:25.226 "is_configured": false, 00:15:25.226 "data_offset": 0, 00:15:25.226 "data_size": 65536 00:15:25.226 }, 00:15:25.226 { 00:15:25.227 "name": "BaseBdev2", 00:15:25.227 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:25.227 "is_configured": true, 00:15:25.227 "data_offset": 0, 00:15:25.227 "data_size": 65536 00:15:25.227 }, 00:15:25.227 { 00:15:25.227 "name": "BaseBdev3", 00:15:25.227 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:25.227 "is_configured": true, 00:15:25.227 "data_offset": 0, 00:15:25.227 "data_size": 65536 00:15:25.227 }, 00:15:25.227 { 00:15:25.227 "name": "BaseBdev4", 00:15:25.227 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:25.227 "is_configured": true, 00:15:25.227 "data_offset": 0, 00:15:25.227 "data_size": 65536 00:15:25.227 } 00:15:25.227 ] 00:15:25.227 }' 00:15:25.227 19:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.227 19:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.795 19:35:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82339b29-9a1e-4bcf-addb-e03b48698740 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.795 [2024-12-05 19:35:19.221999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:25.795 [2024-12-05 19:35:19.222303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:25.795 [2024-12-05 19:35:19.222338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:25.795 
[2024-12-05 19:35:19.222657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:25.795 [2024-12-05 19:35:19.222953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:25.795 [2024-12-05 19:35:19.222970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:25.795 [2024-12-05 19:35:19.223308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.795 NewBaseBdev 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:25.795 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.061 [ 00:15:26.061 { 00:15:26.061 "name": "NewBaseBdev", 00:15:26.061 "aliases": [ 00:15:26.061 "82339b29-9a1e-4bcf-addb-e03b48698740" 00:15:26.061 ], 00:15:26.061 "product_name": "Malloc disk", 00:15:26.061 "block_size": 512, 00:15:26.061 "num_blocks": 65536, 00:15:26.061 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:26.061 "assigned_rate_limits": { 00:15:26.061 "rw_ios_per_sec": 0, 00:15:26.061 "rw_mbytes_per_sec": 0, 00:15:26.061 "r_mbytes_per_sec": 0, 00:15:26.061 "w_mbytes_per_sec": 0 00:15:26.061 }, 00:15:26.061 "claimed": true, 00:15:26.061 "claim_type": "exclusive_write", 00:15:26.061 "zoned": false, 00:15:26.061 "supported_io_types": { 00:15:26.061 "read": true, 00:15:26.061 "write": true, 00:15:26.061 "unmap": true, 00:15:26.061 "flush": true, 00:15:26.061 "reset": true, 00:15:26.061 "nvme_admin": false, 00:15:26.061 "nvme_io": false, 00:15:26.061 "nvme_io_md": false, 00:15:26.061 "write_zeroes": true, 00:15:26.061 "zcopy": true, 00:15:26.061 "get_zone_info": false, 00:15:26.061 "zone_management": false, 00:15:26.061 "zone_append": false, 00:15:26.061 "compare": false, 00:15:26.061 "compare_and_write": false, 00:15:26.061 "abort": true, 00:15:26.061 "seek_hole": false, 00:15:26.061 "seek_data": false, 00:15:26.061 "copy": true, 00:15:26.061 "nvme_iov_md": false 00:15:26.061 }, 00:15:26.061 "memory_domains": [ 00:15:26.061 { 00:15:26.061 "dma_device_id": "system", 00:15:26.061 "dma_device_type": 1 00:15:26.061 }, 00:15:26.061 { 00:15:26.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.061 "dma_device_type": 2 00:15:26.061 } 00:15:26.061 ], 00:15:26.061 "driver_specific": {} 00:15:26.061 } 00:15:26.061 ] 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.061 "name": "Existed_Raid", 00:15:26.061 "uuid": "b3f08767-ebf4-4a29-87f8-5c2d3a8af445", 00:15:26.061 "strip_size_kb": 0, 00:15:26.061 "state": "online", 00:15:26.061 
"raid_level": "raid1", 00:15:26.061 "superblock": false, 00:15:26.061 "num_base_bdevs": 4, 00:15:26.061 "num_base_bdevs_discovered": 4, 00:15:26.061 "num_base_bdevs_operational": 4, 00:15:26.061 "base_bdevs_list": [ 00:15:26.061 { 00:15:26.061 "name": "NewBaseBdev", 00:15:26.061 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:26.061 "is_configured": true, 00:15:26.061 "data_offset": 0, 00:15:26.061 "data_size": 65536 00:15:26.061 }, 00:15:26.061 { 00:15:26.061 "name": "BaseBdev2", 00:15:26.061 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:26.061 "is_configured": true, 00:15:26.061 "data_offset": 0, 00:15:26.061 "data_size": 65536 00:15:26.061 }, 00:15:26.061 { 00:15:26.061 "name": "BaseBdev3", 00:15:26.061 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:26.061 "is_configured": true, 00:15:26.061 "data_offset": 0, 00:15:26.061 "data_size": 65536 00:15:26.061 }, 00:15:26.061 { 00:15:26.061 "name": "BaseBdev4", 00:15:26.061 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:26.061 "is_configured": true, 00:15:26.061 "data_offset": 0, 00:15:26.061 "data_size": 65536 00:15:26.061 } 00:15:26.061 ] 00:15:26.061 }' 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.061 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.640 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.641 [2024-12-05 19:35:19.798668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.641 "name": "Existed_Raid", 00:15:26.641 "aliases": [ 00:15:26.641 "b3f08767-ebf4-4a29-87f8-5c2d3a8af445" 00:15:26.641 ], 00:15:26.641 "product_name": "Raid Volume", 00:15:26.641 "block_size": 512, 00:15:26.641 "num_blocks": 65536, 00:15:26.641 "uuid": "b3f08767-ebf4-4a29-87f8-5c2d3a8af445", 00:15:26.641 "assigned_rate_limits": { 00:15:26.641 "rw_ios_per_sec": 0, 00:15:26.641 "rw_mbytes_per_sec": 0, 00:15:26.641 "r_mbytes_per_sec": 0, 00:15:26.641 "w_mbytes_per_sec": 0 00:15:26.641 }, 00:15:26.641 "claimed": false, 00:15:26.641 "zoned": false, 00:15:26.641 "supported_io_types": { 00:15:26.641 "read": true, 00:15:26.641 "write": true, 00:15:26.641 "unmap": false, 00:15:26.641 "flush": false, 00:15:26.641 "reset": true, 00:15:26.641 "nvme_admin": false, 00:15:26.641 "nvme_io": false, 00:15:26.641 "nvme_io_md": false, 00:15:26.641 "write_zeroes": true, 00:15:26.641 "zcopy": false, 00:15:26.641 "get_zone_info": false, 00:15:26.641 "zone_management": false, 00:15:26.641 "zone_append": false, 00:15:26.641 "compare": false, 00:15:26.641 "compare_and_write": false, 00:15:26.641 "abort": false, 00:15:26.641 "seek_hole": false, 00:15:26.641 "seek_data": false, 00:15:26.641 
"copy": false, 00:15:26.641 "nvme_iov_md": false 00:15:26.641 }, 00:15:26.641 "memory_domains": [ 00:15:26.641 { 00:15:26.641 "dma_device_id": "system", 00:15:26.641 "dma_device_type": 1 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.641 "dma_device_type": 2 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "system", 00:15:26.641 "dma_device_type": 1 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.641 "dma_device_type": 2 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "system", 00:15:26.641 "dma_device_type": 1 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.641 "dma_device_type": 2 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "system", 00:15:26.641 "dma_device_type": 1 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.641 "dma_device_type": 2 00:15:26.641 } 00:15:26.641 ], 00:15:26.641 "driver_specific": { 00:15:26.641 "raid": { 00:15:26.641 "uuid": "b3f08767-ebf4-4a29-87f8-5c2d3a8af445", 00:15:26.641 "strip_size_kb": 0, 00:15:26.641 "state": "online", 00:15:26.641 "raid_level": "raid1", 00:15:26.641 "superblock": false, 00:15:26.641 "num_base_bdevs": 4, 00:15:26.641 "num_base_bdevs_discovered": 4, 00:15:26.641 "num_base_bdevs_operational": 4, 00:15:26.641 "base_bdevs_list": [ 00:15:26.641 { 00:15:26.641 "name": "NewBaseBdev", 00:15:26.641 "uuid": "82339b29-9a1e-4bcf-addb-e03b48698740", 00:15:26.641 "is_configured": true, 00:15:26.641 "data_offset": 0, 00:15:26.641 "data_size": 65536 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "name": "BaseBdev2", 00:15:26.641 "uuid": "1fb4f411-bd02-4396-a105-0a4f2140c0bf", 00:15:26.641 "is_configured": true, 00:15:26.641 "data_offset": 0, 00:15:26.641 "data_size": 65536 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "name": "BaseBdev3", 00:15:26.641 "uuid": "c77c085e-5351-41b8-b680-3a037760e70b", 00:15:26.641 
"is_configured": true, 00:15:26.641 "data_offset": 0, 00:15:26.641 "data_size": 65536 00:15:26.641 }, 00:15:26.641 { 00:15:26.641 "name": "BaseBdev4", 00:15:26.641 "uuid": "b69f3f56-2650-40f0-aaf3-299f0a69cdc7", 00:15:26.641 "is_configured": true, 00:15:26.641 "data_offset": 0, 00:15:26.641 "data_size": 65536 00:15:26.641 } 00:15:26.641 ] 00:15:26.641 } 00:15:26.641 } 00:15:26.641 }' 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:26.641 BaseBdev2 00:15:26.641 BaseBdev3 00:15:26.641 BaseBdev4' 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.641 19:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.641 19:35:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.641 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.901 19:35:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.901 [2024-12-05 19:35:20.182348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.901 [2024-12-05 19:35:20.182544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.901 [2024-12-05 19:35:20.182788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.901 [2024-12-05 19:35:20.183278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.901 [2024-12-05 19:35:20.183462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73310 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73310 ']' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73310 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73310 00:15:26.901 killing process with pid 73310 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73310' 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73310 00:15:26.901 [2024-12-05 19:35:20.221435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.901 19:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73310 00:15:27.160 [2024-12-05 19:35:20.570177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:28.544 00:15:28.544 real 0m12.964s 00:15:28.544 user 0m21.463s 00:15:28.544 sys 0m1.872s 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.544 ************************************ 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.544 END TEST raid_state_function_test 00:15:28.544 ************************************ 
00:15:28.544 19:35:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:28.544 19:35:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:28.544 19:35:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.544 19:35:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.544 ************************************ 00:15:28.544 START TEST raid_state_function_test_sb 00:15:28.544 ************************************ 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.544 
19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73998 00:15:28.544 Process raid pid: 73998 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73998' 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73998 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73998 ']' 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.544 19:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.544 [2024-12-05 19:35:21.835895] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:28.544 [2024-12-05 19:35:21.836082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.802 [2024-12-05 19:35:22.026086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.802 [2024-12-05 19:35:22.165289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.061 [2024-12-05 19:35:22.380790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.061 [2024-12-05 19:35:22.380854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.628 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.628 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:29.628 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:29.628 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.628 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.628 [2024-12-05 19:35:22.893078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.628 [2024-12-05 19:35:22.893158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.628 [2024-12-05 19:35:22.893176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.629 [2024-12-05 19:35:22.893191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.629 [2024-12-05 19:35:22.893201] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:29.629 [2024-12-05 19:35:22.893216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.629 [2024-12-05 19:35:22.893226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:29.629 [2024-12-05 19:35:22.893239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.629 19:35:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.629 "name": "Existed_Raid", 00:15:29.629 "uuid": "d7865956-2ada-44b7-b242-036acef2c8d0", 00:15:29.629 "strip_size_kb": 0, 00:15:29.629 "state": "configuring", 00:15:29.629 "raid_level": "raid1", 00:15:29.629 "superblock": true, 00:15:29.629 "num_base_bdevs": 4, 00:15:29.629 "num_base_bdevs_discovered": 0, 00:15:29.629 "num_base_bdevs_operational": 4, 00:15:29.629 "base_bdevs_list": [ 00:15:29.629 { 00:15:29.629 "name": "BaseBdev1", 00:15:29.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.629 "is_configured": false, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 0 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "name": "BaseBdev2", 00:15:29.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.629 "is_configured": false, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 0 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "name": "BaseBdev3", 00:15:29.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.629 "is_configured": false, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 0 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "name": "BaseBdev4", 00:15:29.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.629 "is_configured": false, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 0 00:15:29.629 } 00:15:29.629 ] 00:15:29.629 }' 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.629 19:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 [2024-12-05 19:35:23.401123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.197 [2024-12-05 19:35:23.401187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 [2024-12-05 19:35:23.409096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.197 [2024-12-05 19:35:23.409146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.197 [2024-12-05 19:35:23.409161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.197 [2024-12-05 19:35:23.409176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.197 [2024-12-05 19:35:23.409185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.197 [2024-12-05 19:35:23.409199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.197 [2024-12-05 19:35:23.409208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:30.197 [2024-12-05 19:35:23.409222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 [2024-12-05 19:35:23.454875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.197 BaseBdev1 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.197 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.197 [ 00:15:30.197 { 00:15:30.197 "name": "BaseBdev1", 00:15:30.197 "aliases": [ 00:15:30.197 "212e8fd7-0d91-4a5a-aeae-0847bc981e7a" 00:15:30.197 ], 00:15:30.197 "product_name": "Malloc disk", 00:15:30.197 "block_size": 512, 00:15:30.197 "num_blocks": 65536, 00:15:30.197 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:30.197 "assigned_rate_limits": { 00:15:30.197 "rw_ios_per_sec": 0, 00:15:30.197 "rw_mbytes_per_sec": 0, 00:15:30.197 "r_mbytes_per_sec": 0, 00:15:30.197 "w_mbytes_per_sec": 0 00:15:30.197 }, 00:15:30.197 "claimed": true, 00:15:30.197 "claim_type": "exclusive_write", 00:15:30.197 "zoned": false, 00:15:30.197 "supported_io_types": { 00:15:30.197 "read": true, 00:15:30.197 "write": true, 00:15:30.197 "unmap": true, 00:15:30.197 "flush": true, 00:15:30.197 "reset": true, 00:15:30.197 "nvme_admin": false, 00:15:30.197 "nvme_io": false, 00:15:30.197 "nvme_io_md": false, 00:15:30.197 "write_zeroes": true, 00:15:30.197 "zcopy": true, 00:15:30.197 "get_zone_info": false, 00:15:30.197 "zone_management": false, 00:15:30.197 "zone_append": false, 00:15:30.197 "compare": false, 00:15:30.198 "compare_and_write": false, 00:15:30.198 "abort": true, 00:15:30.198 "seek_hole": false, 00:15:30.198 "seek_data": false, 00:15:30.198 "copy": true, 00:15:30.198 "nvme_iov_md": false 00:15:30.198 }, 00:15:30.198 "memory_domains": [ 00:15:30.198 { 00:15:30.198 "dma_device_id": "system", 00:15:30.198 "dma_device_type": 1 00:15:30.198 }, 00:15:30.198 { 00:15:30.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.198 "dma_device_type": 2 00:15:30.198 } 00:15:30.198 ], 00:15:30.198 "driver_specific": {} 
00:15:30.198 } 00:15:30.198 ] 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.198 "name": "Existed_Raid", 00:15:30.198 "uuid": "32c8f3ce-25e7-4d5e-9b9b-0568bb9694bf", 00:15:30.198 "strip_size_kb": 0, 00:15:30.198 "state": "configuring", 00:15:30.198 "raid_level": "raid1", 00:15:30.198 "superblock": true, 00:15:30.198 "num_base_bdevs": 4, 00:15:30.198 "num_base_bdevs_discovered": 1, 00:15:30.198 "num_base_bdevs_operational": 4, 00:15:30.198 "base_bdevs_list": [ 00:15:30.198 { 00:15:30.198 "name": "BaseBdev1", 00:15:30.198 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:30.198 "is_configured": true, 00:15:30.198 "data_offset": 2048, 00:15:30.198 "data_size": 63488 00:15:30.198 }, 00:15:30.198 { 00:15:30.198 "name": "BaseBdev2", 00:15:30.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.198 "is_configured": false, 00:15:30.198 "data_offset": 0, 00:15:30.198 "data_size": 0 00:15:30.198 }, 00:15:30.198 { 00:15:30.198 "name": "BaseBdev3", 00:15:30.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.198 "is_configured": false, 00:15:30.198 "data_offset": 0, 00:15:30.198 "data_size": 0 00:15:30.198 }, 00:15:30.198 { 00:15:30.198 "name": "BaseBdev4", 00:15:30.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.198 "is_configured": false, 00:15:30.198 "data_offset": 0, 00:15:30.198 "data_size": 0 00:15:30.198 } 00:15:30.198 ] 00:15:30.198 }' 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.198 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.766 [2024-12-05 19:35:23.995073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.766 [2024-12-05 19:35:23.995141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.766 19:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.766 [2024-12-05 19:35:24.003109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.766 [2024-12-05 19:35:24.005507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.766 [2024-12-05 19:35:24.005562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.766 [2024-12-05 19:35:24.005579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.766 [2024-12-05 19:35:24.005603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.766 [2024-12-05 19:35:24.005613] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:30.766 [2024-12-05 19:35:24.005626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:30.766 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:30.767 19:35:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.767 "name": 
"Existed_Raid", 00:15:30.767 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:30.767 "strip_size_kb": 0, 00:15:30.767 "state": "configuring", 00:15:30.767 "raid_level": "raid1", 00:15:30.767 "superblock": true, 00:15:30.767 "num_base_bdevs": 4, 00:15:30.767 "num_base_bdevs_discovered": 1, 00:15:30.767 "num_base_bdevs_operational": 4, 00:15:30.767 "base_bdevs_list": [ 00:15:30.767 { 00:15:30.767 "name": "BaseBdev1", 00:15:30.767 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:30.767 "is_configured": true, 00:15:30.767 "data_offset": 2048, 00:15:30.767 "data_size": 63488 00:15:30.767 }, 00:15:30.767 { 00:15:30.767 "name": "BaseBdev2", 00:15:30.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.767 "is_configured": false, 00:15:30.767 "data_offset": 0, 00:15:30.767 "data_size": 0 00:15:30.767 }, 00:15:30.767 { 00:15:30.767 "name": "BaseBdev3", 00:15:30.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.767 "is_configured": false, 00:15:30.767 "data_offset": 0, 00:15:30.767 "data_size": 0 00:15:30.767 }, 00:15:30.767 { 00:15:30.767 "name": "BaseBdev4", 00:15:30.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.767 "is_configured": false, 00:15:30.767 "data_offset": 0, 00:15:30.767 "data_size": 0 00:15:30.767 } 00:15:30.767 ] 00:15:30.767 }' 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.767 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.338 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.338 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.338 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.338 [2024-12-05 19:35:24.517932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.339 
BaseBdev2 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.339 [ 00:15:31.339 { 00:15:31.339 "name": "BaseBdev2", 00:15:31.339 "aliases": [ 00:15:31.339 "c8cf135d-b451-443c-8850-7d310a9c4545" 00:15:31.339 ], 00:15:31.339 "product_name": "Malloc disk", 00:15:31.339 "block_size": 512, 00:15:31.339 "num_blocks": 65536, 00:15:31.339 "uuid": "c8cf135d-b451-443c-8850-7d310a9c4545", 00:15:31.339 "assigned_rate_limits": { 
00:15:31.339 "rw_ios_per_sec": 0, 00:15:31.339 "rw_mbytes_per_sec": 0, 00:15:31.339 "r_mbytes_per_sec": 0, 00:15:31.339 "w_mbytes_per_sec": 0 00:15:31.339 }, 00:15:31.339 "claimed": true, 00:15:31.339 "claim_type": "exclusive_write", 00:15:31.339 "zoned": false, 00:15:31.339 "supported_io_types": { 00:15:31.339 "read": true, 00:15:31.339 "write": true, 00:15:31.339 "unmap": true, 00:15:31.339 "flush": true, 00:15:31.339 "reset": true, 00:15:31.339 "nvme_admin": false, 00:15:31.339 "nvme_io": false, 00:15:31.339 "nvme_io_md": false, 00:15:31.339 "write_zeroes": true, 00:15:31.339 "zcopy": true, 00:15:31.339 "get_zone_info": false, 00:15:31.339 "zone_management": false, 00:15:31.339 "zone_append": false, 00:15:31.339 "compare": false, 00:15:31.339 "compare_and_write": false, 00:15:31.339 "abort": true, 00:15:31.339 "seek_hole": false, 00:15:31.339 "seek_data": false, 00:15:31.339 "copy": true, 00:15:31.339 "nvme_iov_md": false 00:15:31.339 }, 00:15:31.339 "memory_domains": [ 00:15:31.339 { 00:15:31.339 "dma_device_id": "system", 00:15:31.339 "dma_device_type": 1 00:15:31.339 }, 00:15:31.339 { 00:15:31.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.339 "dma_device_type": 2 00:15:31.339 } 00:15:31.339 ], 00:15:31.339 "driver_specific": {} 00:15:31.339 } 00:15:31.339 ] 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.339 "name": "Existed_Raid", 00:15:31.339 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:31.339 "strip_size_kb": 0, 00:15:31.339 "state": "configuring", 00:15:31.339 "raid_level": "raid1", 00:15:31.339 "superblock": true, 00:15:31.339 "num_base_bdevs": 4, 00:15:31.339 "num_base_bdevs_discovered": 2, 00:15:31.339 "num_base_bdevs_operational": 4, 00:15:31.339 
"base_bdevs_list": [ 00:15:31.339 { 00:15:31.339 "name": "BaseBdev1", 00:15:31.339 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:31.339 "is_configured": true, 00:15:31.339 "data_offset": 2048, 00:15:31.339 "data_size": 63488 00:15:31.339 }, 00:15:31.339 { 00:15:31.339 "name": "BaseBdev2", 00:15:31.339 "uuid": "c8cf135d-b451-443c-8850-7d310a9c4545", 00:15:31.339 "is_configured": true, 00:15:31.339 "data_offset": 2048, 00:15:31.339 "data_size": 63488 00:15:31.339 }, 00:15:31.339 { 00:15:31.339 "name": "BaseBdev3", 00:15:31.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.339 "is_configured": false, 00:15:31.339 "data_offset": 0, 00:15:31.339 "data_size": 0 00:15:31.339 }, 00:15:31.339 { 00:15:31.339 "name": "BaseBdev4", 00:15:31.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.339 "is_configured": false, 00:15:31.339 "data_offset": 0, 00:15:31.339 "data_size": 0 00:15:31.339 } 00:15:31.339 ] 00:15:31.339 }' 00:15:31.339 19:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.340 19:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.610 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.610 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.610 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.872 [2024-12-05 19:35:25.100238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.872 BaseBdev3 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.872 [ 00:15:31.872 { 00:15:31.872 "name": "BaseBdev3", 00:15:31.872 "aliases": [ 00:15:31.872 "d8a6ad29-c174-4a1d-9d9a-e3f76839f81b" 00:15:31.872 ], 00:15:31.872 "product_name": "Malloc disk", 00:15:31.872 "block_size": 512, 00:15:31.872 "num_blocks": 65536, 00:15:31.872 "uuid": "d8a6ad29-c174-4a1d-9d9a-e3f76839f81b", 00:15:31.872 "assigned_rate_limits": { 00:15:31.872 "rw_ios_per_sec": 0, 00:15:31.872 "rw_mbytes_per_sec": 0, 00:15:31.872 "r_mbytes_per_sec": 0, 00:15:31.872 "w_mbytes_per_sec": 0 00:15:31.872 }, 00:15:31.872 "claimed": true, 00:15:31.872 "claim_type": "exclusive_write", 00:15:31.872 "zoned": false, 00:15:31.872 "supported_io_types": { 00:15:31.872 "read": true, 00:15:31.872 
"write": true, 00:15:31.872 "unmap": true, 00:15:31.872 "flush": true, 00:15:31.872 "reset": true, 00:15:31.872 "nvme_admin": false, 00:15:31.872 "nvme_io": false, 00:15:31.872 "nvme_io_md": false, 00:15:31.872 "write_zeroes": true, 00:15:31.872 "zcopy": true, 00:15:31.872 "get_zone_info": false, 00:15:31.872 "zone_management": false, 00:15:31.872 "zone_append": false, 00:15:31.872 "compare": false, 00:15:31.872 "compare_and_write": false, 00:15:31.872 "abort": true, 00:15:31.872 "seek_hole": false, 00:15:31.872 "seek_data": false, 00:15:31.872 "copy": true, 00:15:31.872 "nvme_iov_md": false 00:15:31.872 }, 00:15:31.872 "memory_domains": [ 00:15:31.872 { 00:15:31.872 "dma_device_id": "system", 00:15:31.872 "dma_device_type": 1 00:15:31.872 }, 00:15:31.872 { 00:15:31.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.872 "dma_device_type": 2 00:15:31.872 } 00:15:31.872 ], 00:15:31.872 "driver_specific": {} 00:15:31.872 } 00:15:31.872 ] 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.872 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.873 "name": "Existed_Raid", 00:15:31.873 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:31.873 "strip_size_kb": 0, 00:15:31.873 "state": "configuring", 00:15:31.873 "raid_level": "raid1", 00:15:31.873 "superblock": true, 00:15:31.873 "num_base_bdevs": 4, 00:15:31.873 "num_base_bdevs_discovered": 3, 00:15:31.873 "num_base_bdevs_operational": 4, 00:15:31.873 "base_bdevs_list": [ 00:15:31.873 { 00:15:31.873 "name": "BaseBdev1", 00:15:31.873 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:31.873 "is_configured": true, 00:15:31.873 "data_offset": 2048, 00:15:31.873 "data_size": 63488 00:15:31.873 }, 00:15:31.873 { 00:15:31.873 "name": "BaseBdev2", 00:15:31.873 "uuid": 
"c8cf135d-b451-443c-8850-7d310a9c4545", 00:15:31.873 "is_configured": true, 00:15:31.873 "data_offset": 2048, 00:15:31.873 "data_size": 63488 00:15:31.873 }, 00:15:31.873 { 00:15:31.873 "name": "BaseBdev3", 00:15:31.873 "uuid": "d8a6ad29-c174-4a1d-9d9a-e3f76839f81b", 00:15:31.873 "is_configured": true, 00:15:31.873 "data_offset": 2048, 00:15:31.873 "data_size": 63488 00:15:31.873 }, 00:15:31.873 { 00:15:31.873 "name": "BaseBdev4", 00:15:31.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.873 "is_configured": false, 00:15:31.873 "data_offset": 0, 00:15:31.873 "data_size": 0 00:15:31.873 } 00:15:31.873 ] 00:15:31.873 }' 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.873 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 [2024-12-05 19:35:25.679939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:32.440 [2024-12-05 19:35:25.680327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:32.440 [2024-12-05 19:35:25.680347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:32.440 BaseBdev4 00:15:32.440 [2024-12-05 19:35:25.680685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:32.440 [2024-12-05 19:35:25.680918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:32.440 [2024-12-05 19:35:25.680947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:32.440 [2024-12-05 19:35:25.681124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 [ 00:15:32.440 { 00:15:32.440 "name": "BaseBdev4", 00:15:32.440 "aliases": [ 00:15:32.440 "aa91c790-2e04-4045-9342-a5a2d8a25628" 00:15:32.440 ], 00:15:32.440 "product_name": "Malloc disk", 00:15:32.440 "block_size": 512, 00:15:32.440 
"num_blocks": 65536, 00:15:32.440 "uuid": "aa91c790-2e04-4045-9342-a5a2d8a25628", 00:15:32.440 "assigned_rate_limits": { 00:15:32.440 "rw_ios_per_sec": 0, 00:15:32.440 "rw_mbytes_per_sec": 0, 00:15:32.440 "r_mbytes_per_sec": 0, 00:15:32.440 "w_mbytes_per_sec": 0 00:15:32.440 }, 00:15:32.440 "claimed": true, 00:15:32.440 "claim_type": "exclusive_write", 00:15:32.440 "zoned": false, 00:15:32.440 "supported_io_types": { 00:15:32.440 "read": true, 00:15:32.440 "write": true, 00:15:32.440 "unmap": true, 00:15:32.440 "flush": true, 00:15:32.440 "reset": true, 00:15:32.440 "nvme_admin": false, 00:15:32.440 "nvme_io": false, 00:15:32.440 "nvme_io_md": false, 00:15:32.440 "write_zeroes": true, 00:15:32.440 "zcopy": true, 00:15:32.440 "get_zone_info": false, 00:15:32.440 "zone_management": false, 00:15:32.440 "zone_append": false, 00:15:32.440 "compare": false, 00:15:32.440 "compare_and_write": false, 00:15:32.440 "abort": true, 00:15:32.440 "seek_hole": false, 00:15:32.440 "seek_data": false, 00:15:32.440 "copy": true, 00:15:32.440 "nvme_iov_md": false 00:15:32.440 }, 00:15:32.440 "memory_domains": [ 00:15:32.440 { 00:15:32.440 "dma_device_id": "system", 00:15:32.440 "dma_device_type": 1 00:15:32.440 }, 00:15:32.440 { 00:15:32.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.440 "dma_device_type": 2 00:15:32.440 } 00:15:32.440 ], 00:15:32.440 "driver_specific": {} 00:15:32.440 } 00:15:32.440 ] 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.440 "name": "Existed_Raid", 00:15:32.440 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:32.440 "strip_size_kb": 0, 00:15:32.440 "state": "online", 00:15:32.440 "raid_level": "raid1", 00:15:32.440 "superblock": true, 00:15:32.440 "num_base_bdevs": 4, 
00:15:32.440 "num_base_bdevs_discovered": 4, 00:15:32.440 "num_base_bdevs_operational": 4, 00:15:32.440 "base_bdevs_list": [ 00:15:32.440 { 00:15:32.440 "name": "BaseBdev1", 00:15:32.440 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:32.440 "is_configured": true, 00:15:32.440 "data_offset": 2048, 00:15:32.440 "data_size": 63488 00:15:32.440 }, 00:15:32.440 { 00:15:32.440 "name": "BaseBdev2", 00:15:32.440 "uuid": "c8cf135d-b451-443c-8850-7d310a9c4545", 00:15:32.440 "is_configured": true, 00:15:32.440 "data_offset": 2048, 00:15:32.440 "data_size": 63488 00:15:32.440 }, 00:15:32.440 { 00:15:32.440 "name": "BaseBdev3", 00:15:32.440 "uuid": "d8a6ad29-c174-4a1d-9d9a-e3f76839f81b", 00:15:32.440 "is_configured": true, 00:15:32.440 "data_offset": 2048, 00:15:32.440 "data_size": 63488 00:15:32.440 }, 00:15:32.440 { 00:15:32.440 "name": "BaseBdev4", 00:15:32.440 "uuid": "aa91c790-2e04-4045-9342-a5a2d8a25628", 00:15:32.440 "is_configured": true, 00:15:32.440 "data_offset": 2048, 00:15:32.440 "data_size": 63488 00:15:32.440 } 00:15:32.440 ] 00:15:32.440 }' 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.440 19:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.007 
19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.007 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.008 [2024-12-05 19:35:26.204571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.008 "name": "Existed_Raid", 00:15:33.008 "aliases": [ 00:15:33.008 "f0746172-9af1-42d0-9935-11861e92813a" 00:15:33.008 ], 00:15:33.008 "product_name": "Raid Volume", 00:15:33.008 "block_size": 512, 00:15:33.008 "num_blocks": 63488, 00:15:33.008 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:33.008 "assigned_rate_limits": { 00:15:33.008 "rw_ios_per_sec": 0, 00:15:33.008 "rw_mbytes_per_sec": 0, 00:15:33.008 "r_mbytes_per_sec": 0, 00:15:33.008 "w_mbytes_per_sec": 0 00:15:33.008 }, 00:15:33.008 "claimed": false, 00:15:33.008 "zoned": false, 00:15:33.008 "supported_io_types": { 00:15:33.008 "read": true, 00:15:33.008 "write": true, 00:15:33.008 "unmap": false, 00:15:33.008 "flush": false, 00:15:33.008 "reset": true, 00:15:33.008 "nvme_admin": false, 00:15:33.008 "nvme_io": false, 00:15:33.008 "nvme_io_md": false, 00:15:33.008 "write_zeroes": true, 00:15:33.008 "zcopy": false, 00:15:33.008 "get_zone_info": false, 00:15:33.008 "zone_management": false, 00:15:33.008 "zone_append": false, 00:15:33.008 "compare": false, 00:15:33.008 "compare_and_write": false, 00:15:33.008 "abort": false, 00:15:33.008 "seek_hole": false, 00:15:33.008 "seek_data": false, 00:15:33.008 "copy": false, 00:15:33.008 
"nvme_iov_md": false 00:15:33.008 }, 00:15:33.008 "memory_domains": [ 00:15:33.008 { 00:15:33.008 "dma_device_id": "system", 00:15:33.008 "dma_device_type": 1 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.008 "dma_device_type": 2 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "system", 00:15:33.008 "dma_device_type": 1 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.008 "dma_device_type": 2 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "system", 00:15:33.008 "dma_device_type": 1 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.008 "dma_device_type": 2 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "system", 00:15:33.008 "dma_device_type": 1 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.008 "dma_device_type": 2 00:15:33.008 } 00:15:33.008 ], 00:15:33.008 "driver_specific": { 00:15:33.008 "raid": { 00:15:33.008 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:33.008 "strip_size_kb": 0, 00:15:33.008 "state": "online", 00:15:33.008 "raid_level": "raid1", 00:15:33.008 "superblock": true, 00:15:33.008 "num_base_bdevs": 4, 00:15:33.008 "num_base_bdevs_discovered": 4, 00:15:33.008 "num_base_bdevs_operational": 4, 00:15:33.008 "base_bdevs_list": [ 00:15:33.008 { 00:15:33.008 "name": "BaseBdev1", 00:15:33.008 "uuid": "212e8fd7-0d91-4a5a-aeae-0847bc981e7a", 00:15:33.008 "is_configured": true, 00:15:33.008 "data_offset": 2048, 00:15:33.008 "data_size": 63488 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "name": "BaseBdev2", 00:15:33.008 "uuid": "c8cf135d-b451-443c-8850-7d310a9c4545", 00:15:33.008 "is_configured": true, 00:15:33.008 "data_offset": 2048, 00:15:33.008 "data_size": 63488 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "name": "BaseBdev3", 00:15:33.008 "uuid": "d8a6ad29-c174-4a1d-9d9a-e3f76839f81b", 00:15:33.008 "is_configured": true, 
00:15:33.008 "data_offset": 2048, 00:15:33.008 "data_size": 63488 00:15:33.008 }, 00:15:33.008 { 00:15:33.008 "name": "BaseBdev4", 00:15:33.008 "uuid": "aa91c790-2e04-4045-9342-a5a2d8a25628", 00:15:33.008 "is_configured": true, 00:15:33.008 "data_offset": 2048, 00:15:33.008 "data_size": 63488 00:15:33.008 } 00:15:33.008 ] 00:15:33.008 } 00:15:33.008 } 00:15:33.008 }' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:33.008 BaseBdev2 00:15:33.008 BaseBdev3 00:15:33.008 BaseBdev4' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.008 19:35:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.008 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.267 [2024-12-05 19:35:26.580375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:33.267 19:35:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.267 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.526 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.526 "name": "Existed_Raid", 00:15:33.526 "uuid": "f0746172-9af1-42d0-9935-11861e92813a", 00:15:33.526 "strip_size_kb": 0, 00:15:33.526 
"state": "online", 00:15:33.526 "raid_level": "raid1", 00:15:33.526 "superblock": true, 00:15:33.526 "num_base_bdevs": 4, 00:15:33.526 "num_base_bdevs_discovered": 3, 00:15:33.526 "num_base_bdevs_operational": 3, 00:15:33.526 "base_bdevs_list": [ 00:15:33.526 { 00:15:33.526 "name": null, 00:15:33.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.526 "is_configured": false, 00:15:33.526 "data_offset": 0, 00:15:33.526 "data_size": 63488 00:15:33.526 }, 00:15:33.526 { 00:15:33.526 "name": "BaseBdev2", 00:15:33.526 "uuid": "c8cf135d-b451-443c-8850-7d310a9c4545", 00:15:33.526 "is_configured": true, 00:15:33.526 "data_offset": 2048, 00:15:33.526 "data_size": 63488 00:15:33.526 }, 00:15:33.526 { 00:15:33.526 "name": "BaseBdev3", 00:15:33.526 "uuid": "d8a6ad29-c174-4a1d-9d9a-e3f76839f81b", 00:15:33.526 "is_configured": true, 00:15:33.526 "data_offset": 2048, 00:15:33.526 "data_size": 63488 00:15:33.526 }, 00:15:33.526 { 00:15:33.526 "name": "BaseBdev4", 00:15:33.526 "uuid": "aa91c790-2e04-4045-9342-a5a2d8a25628", 00:15:33.526 "is_configured": true, 00:15:33.526 "data_offset": 2048, 00:15:33.526 "data_size": 63488 00:15:33.526 } 00:15:33.526 ] 00:15:33.526 }' 00:15:33.526 19:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.526 19:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.785 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:33.785 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.785 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.785 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.785 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.785 19:35:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.785 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 [2024-12-05 19:35:27.256954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.045 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.045 [2024-12-05 19:35:27.411872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.304 [2024-12-05 19:35:27.552132] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:34.304 [2024-12-05 19:35:27.552307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.304 [2024-12-05 19:35:27.636861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.304 [2024-12-05 19:35:27.636933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.304 [2024-12-05 19:35:27.636954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.304 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.305 BaseBdev2 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.305 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:34.564 [ 00:15:34.564 { 00:15:34.564 "name": "BaseBdev2", 00:15:34.564 "aliases": [ 00:15:34.564 "be1c039e-4a71-409b-a993-8f30885e88ed" 00:15:34.564 ], 00:15:34.564 "product_name": "Malloc disk", 00:15:34.564 "block_size": 512, 00:15:34.564 "num_blocks": 65536, 00:15:34.564 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:34.564 "assigned_rate_limits": { 00:15:34.564 "rw_ios_per_sec": 0, 00:15:34.564 "rw_mbytes_per_sec": 0, 00:15:34.564 "r_mbytes_per_sec": 0, 00:15:34.564 "w_mbytes_per_sec": 0 00:15:34.564 }, 00:15:34.564 "claimed": false, 00:15:34.564 "zoned": false, 00:15:34.564 "supported_io_types": { 00:15:34.564 "read": true, 00:15:34.564 "write": true, 00:15:34.564 "unmap": true, 00:15:34.564 "flush": true, 00:15:34.564 "reset": true, 00:15:34.564 "nvme_admin": false, 00:15:34.564 "nvme_io": false, 00:15:34.564 "nvme_io_md": false, 00:15:34.564 "write_zeroes": true, 00:15:34.564 "zcopy": true, 00:15:34.564 "get_zone_info": false, 00:15:34.564 "zone_management": false, 00:15:34.564 "zone_append": false, 00:15:34.564 "compare": false, 00:15:34.564 "compare_and_write": false, 00:15:34.564 "abort": true, 00:15:34.564 "seek_hole": false, 00:15:34.564 "seek_data": false, 00:15:34.564 "copy": true, 00:15:34.564 "nvme_iov_md": false 00:15:34.564 }, 00:15:34.564 "memory_domains": [ 00:15:34.564 { 00:15:34.564 "dma_device_id": "system", 00:15:34.564 "dma_device_type": 1 00:15:34.564 }, 00:15:34.564 { 00:15:34.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.564 "dma_device_type": 2 00:15:34.564 } 00:15:34.564 ], 00:15:34.564 "driver_specific": {} 00:15:34.564 } 00:15:34.564 ] 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:34.564 19:35:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.564 BaseBdev3 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:34.564 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.564 19:35:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.564 [ 00:15:34.564 { 00:15:34.564 "name": "BaseBdev3", 00:15:34.564 "aliases": [ 00:15:34.564 "3600834f-8af2-4e84-a6cb-c11c7a68eb22" 00:15:34.564 ], 00:15:34.564 "product_name": "Malloc disk", 00:15:34.564 "block_size": 512, 00:15:34.565 "num_blocks": 65536, 00:15:34.565 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:34.565 "assigned_rate_limits": { 00:15:34.565 "rw_ios_per_sec": 0, 00:15:34.565 "rw_mbytes_per_sec": 0, 00:15:34.565 "r_mbytes_per_sec": 0, 00:15:34.565 "w_mbytes_per_sec": 0 00:15:34.565 }, 00:15:34.565 "claimed": false, 00:15:34.565 "zoned": false, 00:15:34.565 "supported_io_types": { 00:15:34.565 "read": true, 00:15:34.565 "write": true, 00:15:34.565 "unmap": true, 00:15:34.565 "flush": true, 00:15:34.565 "reset": true, 00:15:34.565 "nvme_admin": false, 00:15:34.565 "nvme_io": false, 00:15:34.565 "nvme_io_md": false, 00:15:34.565 "write_zeroes": true, 00:15:34.565 "zcopy": true, 00:15:34.565 "get_zone_info": false, 00:15:34.565 "zone_management": false, 00:15:34.565 "zone_append": false, 00:15:34.565 "compare": false, 00:15:34.565 "compare_and_write": false, 00:15:34.565 "abort": true, 00:15:34.565 "seek_hole": false, 00:15:34.565 "seek_data": false, 00:15:34.565 "copy": true, 00:15:34.565 "nvme_iov_md": false 00:15:34.565 }, 00:15:34.565 "memory_domains": [ 00:15:34.565 { 00:15:34.565 "dma_device_id": "system", 00:15:34.565 "dma_device_type": 1 00:15:34.565 }, 00:15:34.565 { 00:15:34.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.565 "dma_device_type": 2 00:15:34.565 } 00:15:34.565 ], 00:15:34.565 "driver_specific": {} 00:15:34.565 } 00:15:34.565 ] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 BaseBdev4 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 [ 00:15:34.565 { 00:15:34.565 "name": "BaseBdev4", 00:15:34.565 "aliases": [ 00:15:34.565 "0d5bf8c5-7876-4920-96a6-20d873be73cc" 00:15:34.565 ], 00:15:34.565 "product_name": "Malloc disk", 00:15:34.565 "block_size": 512, 00:15:34.565 "num_blocks": 65536, 00:15:34.565 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:34.565 "assigned_rate_limits": { 00:15:34.565 "rw_ios_per_sec": 0, 00:15:34.565 "rw_mbytes_per_sec": 0, 00:15:34.565 "r_mbytes_per_sec": 0, 00:15:34.565 "w_mbytes_per_sec": 0 00:15:34.565 }, 00:15:34.565 "claimed": false, 00:15:34.565 "zoned": false, 00:15:34.565 "supported_io_types": { 00:15:34.565 "read": true, 00:15:34.565 "write": true, 00:15:34.565 "unmap": true, 00:15:34.565 "flush": true, 00:15:34.565 "reset": true, 00:15:34.565 "nvme_admin": false, 00:15:34.565 "nvme_io": false, 00:15:34.565 "nvme_io_md": false, 00:15:34.565 "write_zeroes": true, 00:15:34.565 "zcopy": true, 00:15:34.565 "get_zone_info": false, 00:15:34.565 "zone_management": false, 00:15:34.565 "zone_append": false, 00:15:34.565 "compare": false, 00:15:34.565 "compare_and_write": false, 00:15:34.565 "abort": true, 00:15:34.565 "seek_hole": false, 00:15:34.565 "seek_data": false, 00:15:34.565 "copy": true, 00:15:34.565 "nvme_iov_md": false 00:15:34.565 }, 00:15:34.565 "memory_domains": [ 00:15:34.565 { 00:15:34.565 "dma_device_id": "system", 00:15:34.565 "dma_device_type": 1 00:15:34.565 }, 00:15:34.565 { 00:15:34.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.565 "dma_device_type": 2 00:15:34.565 } 00:15:34.565 ], 00:15:34.565 "driver_specific": {} 00:15:34.565 } 00:15:34.565 ] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 [2024-12-05 19:35:27.912779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.565 [2024-12-05 19:35:27.912837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.565 [2024-12-05 19:35:27.912865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.565 [2024-12-05 19:35:27.915353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.565 [2024-12-05 19:35:27.915422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.565 "name": "Existed_Raid", 00:15:34.565 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:34.565 "strip_size_kb": 0, 00:15:34.565 "state": "configuring", 00:15:34.565 "raid_level": "raid1", 00:15:34.565 "superblock": true, 00:15:34.565 "num_base_bdevs": 4, 00:15:34.565 "num_base_bdevs_discovered": 3, 00:15:34.565 "num_base_bdevs_operational": 4, 00:15:34.565 "base_bdevs_list": [ 00:15:34.565 { 00:15:34.565 "name": "BaseBdev1", 00:15:34.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.565 "is_configured": false, 00:15:34.565 "data_offset": 0, 00:15:34.565 "data_size": 0 00:15:34.565 }, 00:15:34.565 { 00:15:34.565 "name": "BaseBdev2", 00:15:34.565 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 
00:15:34.565 "is_configured": true, 00:15:34.565 "data_offset": 2048, 00:15:34.565 "data_size": 63488 00:15:34.565 }, 00:15:34.565 { 00:15:34.565 "name": "BaseBdev3", 00:15:34.565 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:34.565 "is_configured": true, 00:15:34.565 "data_offset": 2048, 00:15:34.565 "data_size": 63488 00:15:34.565 }, 00:15:34.565 { 00:15:34.565 "name": "BaseBdev4", 00:15:34.565 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:34.565 "is_configured": true, 00:15:34.565 "data_offset": 2048, 00:15:34.565 "data_size": 63488 00:15:34.565 } 00:15:34.565 ] 00:15:34.565 }' 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.565 19:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 [2024-12-05 19:35:28.444969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.134 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.134 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.134 "name": "Existed_Raid", 00:15:35.134 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:35.134 "strip_size_kb": 0, 00:15:35.134 "state": "configuring", 00:15:35.134 "raid_level": "raid1", 00:15:35.134 "superblock": true, 00:15:35.134 "num_base_bdevs": 4, 00:15:35.134 "num_base_bdevs_discovered": 2, 00:15:35.134 "num_base_bdevs_operational": 4, 00:15:35.134 "base_bdevs_list": [ 00:15:35.134 { 00:15:35.134 "name": "BaseBdev1", 00:15:35.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.134 "is_configured": false, 00:15:35.134 "data_offset": 0, 00:15:35.134 "data_size": 0 00:15:35.134 }, 00:15:35.134 { 00:15:35.134 "name": null, 00:15:35.134 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:35.134 
"is_configured": false, 00:15:35.134 "data_offset": 0, 00:15:35.134 "data_size": 63488 00:15:35.134 }, 00:15:35.134 { 00:15:35.134 "name": "BaseBdev3", 00:15:35.134 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:35.134 "is_configured": true, 00:15:35.134 "data_offset": 2048, 00:15:35.134 "data_size": 63488 00:15:35.134 }, 00:15:35.134 { 00:15:35.134 "name": "BaseBdev4", 00:15:35.134 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:35.134 "is_configured": true, 00:15:35.134 "data_offset": 2048, 00:15:35.134 "data_size": 63488 00:15:35.134 } 00:15:35.134 ] 00:15:35.134 }' 00:15:35.134 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.134 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.700 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.700 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 19:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:35.700 19:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 [2024-12-05 19:35:29.075348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.700 BaseBdev1 
00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.700 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 [ 00:15:35.700 { 00:15:35.700 "name": "BaseBdev1", 00:15:35.700 "aliases": [ 00:15:35.700 "0a2bcaba-f079-48c4-8986-82afe4a7bc4e" 00:15:35.700 ], 00:15:35.700 "product_name": "Malloc disk", 00:15:35.700 "block_size": 512, 00:15:35.700 "num_blocks": 65536, 00:15:35.700 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:35.700 "assigned_rate_limits": { 00:15:35.700 
"rw_ios_per_sec": 0, 00:15:35.700 "rw_mbytes_per_sec": 0, 00:15:35.700 "r_mbytes_per_sec": 0, 00:15:35.700 "w_mbytes_per_sec": 0 00:15:35.700 }, 00:15:35.700 "claimed": true, 00:15:35.700 "claim_type": "exclusive_write", 00:15:35.700 "zoned": false, 00:15:35.700 "supported_io_types": { 00:15:35.700 "read": true, 00:15:35.700 "write": true, 00:15:35.700 "unmap": true, 00:15:35.700 "flush": true, 00:15:35.700 "reset": true, 00:15:35.700 "nvme_admin": false, 00:15:35.701 "nvme_io": false, 00:15:35.701 "nvme_io_md": false, 00:15:35.701 "write_zeroes": true, 00:15:35.701 "zcopy": true, 00:15:35.701 "get_zone_info": false, 00:15:35.701 "zone_management": false, 00:15:35.701 "zone_append": false, 00:15:35.701 "compare": false, 00:15:35.701 "compare_and_write": false, 00:15:35.701 "abort": true, 00:15:35.701 "seek_hole": false, 00:15:35.701 "seek_data": false, 00:15:35.701 "copy": true, 00:15:35.701 "nvme_iov_md": false 00:15:35.701 }, 00:15:35.701 "memory_domains": [ 00:15:35.701 { 00:15:35.701 "dma_device_id": "system", 00:15:35.701 "dma_device_type": 1 00:15:35.701 }, 00:15:35.701 { 00:15:35.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.701 "dma_device_type": 2 00:15:35.701 } 00:15:35.701 ], 00:15:35.701 "driver_specific": {} 00:15:35.701 } 00:15:35.701 ] 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.701 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.959 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.959 "name": "Existed_Raid", 00:15:35.959 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:35.959 "strip_size_kb": 0, 00:15:35.959 "state": "configuring", 00:15:35.959 "raid_level": "raid1", 00:15:35.959 "superblock": true, 00:15:35.959 "num_base_bdevs": 4, 00:15:35.959 "num_base_bdevs_discovered": 3, 00:15:35.959 "num_base_bdevs_operational": 4, 00:15:35.959 "base_bdevs_list": [ 00:15:35.959 { 00:15:35.959 "name": "BaseBdev1", 00:15:35.959 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:35.959 "is_configured": true, 00:15:35.959 "data_offset": 2048, 00:15:35.959 "data_size": 63488 
00:15:35.959 }, 00:15:35.959 { 00:15:35.959 "name": null, 00:15:35.959 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:35.959 "is_configured": false, 00:15:35.959 "data_offset": 0, 00:15:35.959 "data_size": 63488 00:15:35.959 }, 00:15:35.959 { 00:15:35.959 "name": "BaseBdev3", 00:15:35.959 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:35.959 "is_configured": true, 00:15:35.959 "data_offset": 2048, 00:15:35.959 "data_size": 63488 00:15:35.959 }, 00:15:35.959 { 00:15:35.959 "name": "BaseBdev4", 00:15:35.959 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:35.960 "is_configured": true, 00:15:35.960 "data_offset": 2048, 00:15:35.960 "data_size": 63488 00:15:35.960 } 00:15:35.960 ] 00:15:35.960 }' 00:15:35.960 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.960 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.218 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.487 
[2024-12-05 19:35:29.659616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.487 19:35:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.487 "name": "Existed_Raid", 00:15:36.487 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:36.487 "strip_size_kb": 0, 00:15:36.487 "state": "configuring", 00:15:36.487 "raid_level": "raid1", 00:15:36.487 "superblock": true, 00:15:36.487 "num_base_bdevs": 4, 00:15:36.487 "num_base_bdevs_discovered": 2, 00:15:36.487 "num_base_bdevs_operational": 4, 00:15:36.487 "base_bdevs_list": [ 00:15:36.487 { 00:15:36.487 "name": "BaseBdev1", 00:15:36.487 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:36.487 "is_configured": true, 00:15:36.487 "data_offset": 2048, 00:15:36.487 "data_size": 63488 00:15:36.487 }, 00:15:36.487 { 00:15:36.487 "name": null, 00:15:36.487 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:36.487 "is_configured": false, 00:15:36.487 "data_offset": 0, 00:15:36.487 "data_size": 63488 00:15:36.487 }, 00:15:36.487 { 00:15:36.487 "name": null, 00:15:36.487 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:36.487 "is_configured": false, 00:15:36.487 "data_offset": 0, 00:15:36.487 "data_size": 63488 00:15:36.487 }, 00:15:36.487 { 00:15:36.487 "name": "BaseBdev4", 00:15:36.487 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:36.487 "is_configured": true, 00:15:36.487 "data_offset": 2048, 00:15:36.487 "data_size": 63488 00:15:36.487 } 00:15:36.487 ] 00:15:36.487 }' 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.487 19:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.098 
19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.098 [2024-12-05 19:35:30.243827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.098 "name": "Existed_Raid", 00:15:37.098 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:37.098 "strip_size_kb": 0, 00:15:37.098 "state": "configuring", 00:15:37.098 "raid_level": "raid1", 00:15:37.098 "superblock": true, 00:15:37.098 "num_base_bdevs": 4, 00:15:37.098 "num_base_bdevs_discovered": 3, 00:15:37.098 "num_base_bdevs_operational": 4, 00:15:37.098 "base_bdevs_list": [ 00:15:37.098 { 00:15:37.098 "name": "BaseBdev1", 00:15:37.098 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:37.098 "is_configured": true, 00:15:37.098 "data_offset": 2048, 00:15:37.098 "data_size": 63488 00:15:37.098 }, 00:15:37.098 { 00:15:37.098 "name": null, 00:15:37.098 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:37.098 "is_configured": false, 00:15:37.098 "data_offset": 0, 00:15:37.098 "data_size": 63488 00:15:37.098 }, 00:15:37.098 { 00:15:37.098 "name": "BaseBdev3", 00:15:37.098 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:37.098 "is_configured": true, 00:15:37.098 "data_offset": 2048, 00:15:37.098 "data_size": 63488 00:15:37.098 }, 00:15:37.098 { 00:15:37.098 "name": "BaseBdev4", 00:15:37.098 "uuid": 
"0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:37.098 "is_configured": true, 00:15:37.098 "data_offset": 2048, 00:15:37.098 "data_size": 63488 00:15:37.098 } 00:15:37.098 ] 00:15:37.098 }' 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.098 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.356 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.356 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.356 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.356 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.614 [2024-12-05 19:35:30.852041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.614 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.615 "name": "Existed_Raid", 00:15:37.615 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:37.615 "strip_size_kb": 0, 00:15:37.615 "state": "configuring", 00:15:37.615 "raid_level": "raid1", 00:15:37.615 "superblock": true, 00:15:37.615 "num_base_bdevs": 4, 00:15:37.615 "num_base_bdevs_discovered": 2, 00:15:37.615 "num_base_bdevs_operational": 4, 00:15:37.615 "base_bdevs_list": [ 00:15:37.615 { 00:15:37.615 "name": null, 00:15:37.615 
"uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:37.615 "is_configured": false, 00:15:37.615 "data_offset": 0, 00:15:37.615 "data_size": 63488 00:15:37.615 }, 00:15:37.615 { 00:15:37.615 "name": null, 00:15:37.615 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:37.615 "is_configured": false, 00:15:37.615 "data_offset": 0, 00:15:37.615 "data_size": 63488 00:15:37.615 }, 00:15:37.615 { 00:15:37.615 "name": "BaseBdev3", 00:15:37.615 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:37.615 "is_configured": true, 00:15:37.615 "data_offset": 2048, 00:15:37.615 "data_size": 63488 00:15:37.615 }, 00:15:37.615 { 00:15:37.615 "name": "BaseBdev4", 00:15:37.615 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:37.615 "is_configured": true, 00:15:37.615 "data_offset": 2048, 00:15:37.615 "data_size": 63488 00:15:37.615 } 00:15:37.615 ] 00:15:37.615 }' 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.615 19:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 [2024-12-05 19:35:31.549512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.181 "name": "Existed_Raid", 00:15:38.181 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:38.181 "strip_size_kb": 0, 00:15:38.181 "state": "configuring", 00:15:38.181 "raid_level": "raid1", 00:15:38.181 "superblock": true, 00:15:38.181 "num_base_bdevs": 4, 00:15:38.181 "num_base_bdevs_discovered": 3, 00:15:38.181 "num_base_bdevs_operational": 4, 00:15:38.181 "base_bdevs_list": [ 00:15:38.181 { 00:15:38.181 "name": null, 00:15:38.181 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:38.181 "is_configured": false, 00:15:38.181 "data_offset": 0, 00:15:38.181 "data_size": 63488 00:15:38.181 }, 00:15:38.181 { 00:15:38.181 "name": "BaseBdev2", 00:15:38.181 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:38.181 "is_configured": true, 00:15:38.181 "data_offset": 2048, 00:15:38.181 "data_size": 63488 00:15:38.181 }, 00:15:38.181 { 00:15:38.181 "name": "BaseBdev3", 00:15:38.181 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:38.181 "is_configured": true, 00:15:38.181 "data_offset": 2048, 00:15:38.181 "data_size": 63488 00:15:38.181 }, 00:15:38.181 { 00:15:38.181 "name": "BaseBdev4", 00:15:38.181 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:38.181 "is_configured": true, 00:15:38.181 "data_offset": 2048, 00:15:38.181 "data_size": 63488 00:15:38.181 } 00:15:38.181 ] 00:15:38.181 }' 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.181 19:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.747 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.747 19:35:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.748 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0a2bcaba-f079-48c4-8986-82afe4a7bc4e 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.006 [2024-12-05 19:35:32.236645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:39.006 [2024-12-05 19:35:32.237003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:39.006 [2024-12-05 19:35:32.237027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:39.006 [2024-12-05 19:35:32.237351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:39.006 
NewBaseBdev 00:15:39.006 [2024-12-05 19:35:32.237555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:39.006 [2024-12-05 19:35:32.237571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:39.006 [2024-12-05 19:35:32.237763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.006 19:35:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.006 [ 00:15:39.006 { 00:15:39.006 "name": "NewBaseBdev", 00:15:39.006 "aliases": [ 00:15:39.006 "0a2bcaba-f079-48c4-8986-82afe4a7bc4e" 00:15:39.006 ], 00:15:39.006 "product_name": "Malloc disk", 00:15:39.006 "block_size": 512, 00:15:39.006 "num_blocks": 65536, 00:15:39.006 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:39.006 "assigned_rate_limits": { 00:15:39.006 "rw_ios_per_sec": 0, 00:15:39.006 "rw_mbytes_per_sec": 0, 00:15:39.006 "r_mbytes_per_sec": 0, 00:15:39.006 "w_mbytes_per_sec": 0 00:15:39.006 }, 00:15:39.006 "claimed": true, 00:15:39.006 "claim_type": "exclusive_write", 00:15:39.006 "zoned": false, 00:15:39.006 "supported_io_types": { 00:15:39.006 "read": true, 00:15:39.006 "write": true, 00:15:39.006 "unmap": true, 00:15:39.006 "flush": true, 00:15:39.006 "reset": true, 00:15:39.006 "nvme_admin": false, 00:15:39.006 "nvme_io": false, 00:15:39.006 "nvme_io_md": false, 00:15:39.006 "write_zeroes": true, 00:15:39.006 "zcopy": true, 00:15:39.006 "get_zone_info": false, 00:15:39.006 "zone_management": false, 00:15:39.006 "zone_append": false, 00:15:39.006 "compare": false, 00:15:39.006 "compare_and_write": false, 00:15:39.006 "abort": true, 00:15:39.006 "seek_hole": false, 00:15:39.006 "seek_data": false, 00:15:39.006 "copy": true, 00:15:39.006 "nvme_iov_md": false 00:15:39.006 }, 00:15:39.006 "memory_domains": [ 00:15:39.006 { 00:15:39.006 "dma_device_id": "system", 00:15:39.006 "dma_device_type": 1 00:15:39.006 }, 00:15:39.007 { 00:15:39.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.007 "dma_device_type": 2 00:15:39.007 } 00:15:39.007 ], 00:15:39.007 "driver_specific": {} 00:15:39.007 } 00:15:39.007 ] 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.007 "name": "Existed_Raid", 00:15:39.007 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:39.007 "strip_size_kb": 0, 00:15:39.007 "state": "online", 00:15:39.007 "raid_level": 
"raid1", 00:15:39.007 "superblock": true, 00:15:39.007 "num_base_bdevs": 4, 00:15:39.007 "num_base_bdevs_discovered": 4, 00:15:39.007 "num_base_bdevs_operational": 4, 00:15:39.007 "base_bdevs_list": [ 00:15:39.007 { 00:15:39.007 "name": "NewBaseBdev", 00:15:39.007 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 2048, 00:15:39.007 "data_size": 63488 00:15:39.007 }, 00:15:39.007 { 00:15:39.007 "name": "BaseBdev2", 00:15:39.007 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 2048, 00:15:39.007 "data_size": 63488 00:15:39.007 }, 00:15:39.007 { 00:15:39.007 "name": "BaseBdev3", 00:15:39.007 "uuid": "3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 2048, 00:15:39.007 "data_size": 63488 00:15:39.007 }, 00:15:39.007 { 00:15:39.007 "name": "BaseBdev4", 00:15:39.007 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:39.007 "is_configured": true, 00:15:39.007 "data_offset": 2048, 00:15:39.007 "data_size": 63488 00:15:39.007 } 00:15:39.007 ] 00:15:39.007 }' 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.007 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.574 [2024-12-05 19:35:32.793317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.574 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.574 "name": "Existed_Raid", 00:15:39.574 "aliases": [ 00:15:39.574 "86598a4f-107e-41b8-976d-8bae6cf5153f" 00:15:39.574 ], 00:15:39.574 "product_name": "Raid Volume", 00:15:39.574 "block_size": 512, 00:15:39.574 "num_blocks": 63488, 00:15:39.574 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:39.574 "assigned_rate_limits": { 00:15:39.574 "rw_ios_per_sec": 0, 00:15:39.574 "rw_mbytes_per_sec": 0, 00:15:39.574 "r_mbytes_per_sec": 0, 00:15:39.574 "w_mbytes_per_sec": 0 00:15:39.574 }, 00:15:39.574 "claimed": false, 00:15:39.574 "zoned": false, 00:15:39.574 "supported_io_types": { 00:15:39.574 "read": true, 00:15:39.574 "write": true, 00:15:39.574 "unmap": false, 00:15:39.574 "flush": false, 00:15:39.574 "reset": true, 00:15:39.574 "nvme_admin": false, 00:15:39.574 "nvme_io": false, 00:15:39.574 "nvme_io_md": false, 00:15:39.574 "write_zeroes": true, 00:15:39.574 "zcopy": false, 00:15:39.574 "get_zone_info": false, 00:15:39.574 "zone_management": false, 00:15:39.574 "zone_append": false, 00:15:39.574 "compare": false, 00:15:39.574 "compare_and_write": false, 00:15:39.575 "abort": false, 00:15:39.575 "seek_hole": false, 
00:15:39.575 "seek_data": false, 00:15:39.575 "copy": false, 00:15:39.575 "nvme_iov_md": false 00:15:39.575 }, 00:15:39.575 "memory_domains": [ 00:15:39.575 { 00:15:39.575 "dma_device_id": "system", 00:15:39.575 "dma_device_type": 1 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.575 "dma_device_type": 2 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "system", 00:15:39.575 "dma_device_type": 1 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.575 "dma_device_type": 2 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "system", 00:15:39.575 "dma_device_type": 1 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.575 "dma_device_type": 2 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "system", 00:15:39.575 "dma_device_type": 1 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.575 "dma_device_type": 2 00:15:39.575 } 00:15:39.575 ], 00:15:39.575 "driver_specific": { 00:15:39.575 "raid": { 00:15:39.575 "uuid": "86598a4f-107e-41b8-976d-8bae6cf5153f", 00:15:39.575 "strip_size_kb": 0, 00:15:39.575 "state": "online", 00:15:39.575 "raid_level": "raid1", 00:15:39.575 "superblock": true, 00:15:39.575 "num_base_bdevs": 4, 00:15:39.575 "num_base_bdevs_discovered": 4, 00:15:39.575 "num_base_bdevs_operational": 4, 00:15:39.575 "base_bdevs_list": [ 00:15:39.575 { 00:15:39.575 "name": "NewBaseBdev", 00:15:39.575 "uuid": "0a2bcaba-f079-48c4-8986-82afe4a7bc4e", 00:15:39.575 "is_configured": true, 00:15:39.575 "data_offset": 2048, 00:15:39.575 "data_size": 63488 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "name": "BaseBdev2", 00:15:39.575 "uuid": "be1c039e-4a71-409b-a993-8f30885e88ed", 00:15:39.575 "is_configured": true, 00:15:39.575 "data_offset": 2048, 00:15:39.575 "data_size": 63488 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "name": "BaseBdev3", 00:15:39.575 "uuid": 
"3600834f-8af2-4e84-a6cb-c11c7a68eb22", 00:15:39.575 "is_configured": true, 00:15:39.575 "data_offset": 2048, 00:15:39.575 "data_size": 63488 00:15:39.575 }, 00:15:39.575 { 00:15:39.575 "name": "BaseBdev4", 00:15:39.575 "uuid": "0d5bf8c5-7876-4920-96a6-20d873be73cc", 00:15:39.575 "is_configured": true, 00:15:39.575 "data_offset": 2048, 00:15:39.575 "data_size": 63488 00:15:39.575 } 00:15:39.575 ] 00:15:39.575 } 00:15:39.575 } 00:15:39.575 }' 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:39.575 BaseBdev2 00:15:39.575 BaseBdev3 00:15:39.575 BaseBdev4' 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.575 19:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.575 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.834 
19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.834 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.834 [2024-12-05 19:35:33.189011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.834 [2024-12-05 19:35:33.189061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.834 [2024-12-05 19:35:33.189201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.835 [2024-12-05 19:35:33.189587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.835 [2024-12-05 19:35:33.189618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:39.835 19:35:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73998 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73998 ']' 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73998 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73998 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.835 killing process with pid 73998 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73998' 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73998 00:15:39.835 [2024-12-05 19:35:33.229455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.835 19:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73998 00:15:40.404 [2024-12-05 19:35:33.573979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.343 19:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:41.343 00:15:41.343 real 0m12.874s 00:15:41.343 user 0m21.446s 00:15:41.343 sys 0m1.817s 00:15:41.343 19:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.343 19:35:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.343 ************************************ 00:15:41.343 END TEST raid_state_function_test_sb 00:15:41.343 ************************************ 00:15:41.343 19:35:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:41.343 19:35:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:41.343 19:35:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.343 19:35:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.343 ************************************ 00:15:41.343 START TEST raid_superblock_test 00:15:41.343 ************************************ 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74674 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74674 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74674 ']' 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.343 19:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.343 [2024-12-05 19:35:34.739728] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:41.343 [2024-12-05 19:35:34.739883] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74674 ] 00:15:41.602 [2024-12-05 19:35:34.909628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.602 [2024-12-05 19:35:35.040087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.860 [2024-12-05 19:35:35.235424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.860 [2024-12-05 19:35:35.235474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:42.427 
19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.427 malloc1 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.427 [2024-12-05 19:35:35.758250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.427 [2024-12-05 19:35:35.758346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.427 [2024-12-05 19:35:35.758377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:42.427 [2024-12-05 19:35:35.758393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.427 [2024-12-05 19:35:35.761277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.427 [2024-12-05 19:35:35.761335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.427 pt1 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.427 malloc2 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.427 [2024-12-05 19:35:35.806166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.427 [2024-12-05 19:35:35.806250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.427 [2024-12-05 19:35:35.806287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:42.427 [2024-12-05 19:35:35.806303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.427 [2024-12-05 19:35:35.809014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.427 [2024-12-05 19:35:35.809062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.427 
pt2 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.427 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.687 malloc3 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.687 [2024-12-05 19:35:35.883936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:42.687 [2024-12-05 19:35:35.884003] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.687 [2024-12-05 19:35:35.884036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:42.687 [2024-12-05 19:35:35.884052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.687 [2024-12-05 19:35:35.886862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.687 [2024-12-05 19:35:35.886920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:42.687 pt3 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.687 malloc4 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.687 [2024-12-05 19:35:35.935633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:42.687 [2024-12-05 19:35:35.935742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.687 [2024-12-05 19:35:35.935776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:42.687 [2024-12-05 19:35:35.935792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.687 [2024-12-05 19:35:35.938494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.687 [2024-12-05 19:35:35.938551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:42.687 pt4 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.687 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.687 [2024-12-05 19:35:35.943645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:42.687 [2024-12-05 19:35:35.946044] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.687 [2024-12-05 19:35:35.946194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:42.687 [2024-12-05 19:35:35.946284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:42.687 [2024-12-05 19:35:35.946569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:42.687 [2024-12-05 19:35:35.946601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:42.687 [2024-12-05 19:35:35.946954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:42.687 [2024-12-05 19:35:35.947258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:42.687 [2024-12-05 19:35:35.947291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:42.688 [2024-12-05 19:35:35.947470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.688 
19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.688 19:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.688 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.688 "name": "raid_bdev1", 00:15:42.688 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:42.688 "strip_size_kb": 0, 00:15:42.688 "state": "online", 00:15:42.688 "raid_level": "raid1", 00:15:42.688 "superblock": true, 00:15:42.688 "num_base_bdevs": 4, 00:15:42.688 "num_base_bdevs_discovered": 4, 00:15:42.688 "num_base_bdevs_operational": 4, 00:15:42.688 "base_bdevs_list": [ 00:15:42.688 { 00:15:42.688 "name": "pt1", 00:15:42.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.688 "is_configured": true, 00:15:42.688 "data_offset": 2048, 00:15:42.688 "data_size": 63488 00:15:42.688 }, 00:15:42.688 { 00:15:42.688 "name": "pt2", 00:15:42.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.688 "is_configured": true, 00:15:42.688 "data_offset": 2048, 00:15:42.688 "data_size": 63488 00:15:42.688 }, 00:15:42.688 { 00:15:42.688 "name": "pt3", 00:15:42.688 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.688 "is_configured": true, 00:15:42.688 "data_offset": 2048, 00:15:42.688 "data_size": 63488 
00:15:42.688 }, 00:15:42.688 { 00:15:42.688 "name": "pt4", 00:15:42.688 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.688 "is_configured": true, 00:15:42.688 "data_offset": 2048, 00:15:42.688 "data_size": 63488 00:15:42.688 } 00:15:42.688 ] 00:15:42.688 }' 00:15:42.688 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.688 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.256 [2024-12-05 19:35:36.480352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.256 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.256 "name": "raid_bdev1", 00:15:43.256 "aliases": [ 00:15:43.256 "c0fa2ea3-8056-4888-afdc-4e9aec359659" 00:15:43.256 ], 
00:15:43.256 "product_name": "Raid Volume",
00:15:43.256 "block_size": 512,
00:15:43.256 "num_blocks": 63488,
00:15:43.256 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659",
00:15:43.256 "assigned_rate_limits": {
00:15:43.256 "rw_ios_per_sec": 0,
00:15:43.256 "rw_mbytes_per_sec": 0,
00:15:43.256 "r_mbytes_per_sec": 0,
00:15:43.256 "w_mbytes_per_sec": 0
00:15:43.256 },
00:15:43.256 "claimed": false,
00:15:43.256 "zoned": false,
00:15:43.256 "supported_io_types": {
00:15:43.256 "read": true,
00:15:43.256 "write": true,
00:15:43.256 "unmap": false,
00:15:43.256 "flush": false,
00:15:43.256 "reset": true,
00:15:43.256 "nvme_admin": false,
00:15:43.256 "nvme_io": false,
00:15:43.256 "nvme_io_md": false,
00:15:43.256 "write_zeroes": true,
00:15:43.256 "zcopy": false,
00:15:43.256 "get_zone_info": false,
00:15:43.256 "zone_management": false,
00:15:43.256 "zone_append": false,
00:15:43.256 "compare": false,
00:15:43.256 "compare_and_write": false,
00:15:43.256 "abort": false,
00:15:43.256 "seek_hole": false,
00:15:43.256 "seek_data": false,
00:15:43.256 "copy": false,
00:15:43.256 "nvme_iov_md": false
00:15:43.256 },
00:15:43.256 "memory_domains": [
00:15:43.256 {
00:15:43.256 "dma_device_id": "system",
00:15:43.256 "dma_device_type": 1
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:43.256 "dma_device_type": 2
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "system",
00:15:43.256 "dma_device_type": 1
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:43.256 "dma_device_type": 2
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "system",
00:15:43.256 "dma_device_type": 1
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:43.256 "dma_device_type": 2
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "system",
00:15:43.256 "dma_device_type": 1
00:15:43.256 },
00:15:43.256 {
00:15:43.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:43.256 "dma_device_type": 2
00:15:43.256 }
00:15:43.256 ],
00:15:43.256 "driver_specific": {
00:15:43.256 "raid": {
00:15:43.256 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659",
00:15:43.256 "strip_size_kb": 0,
00:15:43.256 "state": "online",
00:15:43.256 "raid_level": "raid1",
00:15:43.256 "superblock": true,
00:15:43.256 "num_base_bdevs": 4,
00:15:43.256 "num_base_bdevs_discovered": 4,
00:15:43.257 "num_base_bdevs_operational": 4,
00:15:43.257 "base_bdevs_list": [
00:15:43.257 {
00:15:43.257 "name": "pt1",
00:15:43.257 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:43.257 "is_configured": true,
00:15:43.257 "data_offset": 2048,
00:15:43.257 "data_size": 63488
00:15:43.257 },
00:15:43.257 {
00:15:43.257 "name": "pt2",
00:15:43.257 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:43.257 "is_configured": true,
00:15:43.257 "data_offset": 2048,
00:15:43.257 "data_size": 63488
00:15:43.257 },
00:15:43.257 {
00:15:43.257 "name": "pt3",
00:15:43.257 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:43.257 "is_configured": true,
00:15:43.257 "data_offset": 2048,
00:15:43.257 "data_size": 63488
00:15:43.257 },
00:15:43.257 {
00:15:43.257 "name": "pt4",
00:15:43.257 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:43.257 "is_configured": true,
00:15:43.257 "data_offset": 2048,
00:15:43.257 "data_size": 63488
00:15:43.257 }
00:15:43.257 ]
00:15:43.257 }
00:15:43.257 }
00:15:43.257 }'
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:43.257 pt2
00:15:43.257 pt3
00:15:43.257 pt4'
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.257 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.516 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.516 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:43.516 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:43.517 [2024-12-05 19:35:36.844363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c0fa2ea3-8056-4888-afdc-4e9aec359659
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c0fa2ea3-8056-4888-afdc-4e9aec359659 ']'
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.517 [2024-12-05 19:35:36.895977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:43.517 [2024-12-05 19:35:36.896009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:43.517 [2024-12-05 19:35:36.896099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:43.517 [2024-12-05 19:35:36.896220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:43.517 [2024-12-05 19:35:36.896259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.517 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.777 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 [2024-12-05 19:35:37.048056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:43.778 [2024-12-05 19:35:37.050658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:43.778 [2024-12-05 19:35:37.050774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:43.778 [2024-12-05 19:35:37.050832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:15:43.778 [2024-12-05 19:35:37.050903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:43.778 [2024-12-05 19:35:37.050973] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:43.778 [2024-12-05 19:35:37.051007] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:43.778 [2024-12-05 19:35:37.051048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:15:43.778 [2024-12-05 19:35:37.051070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:43.778 [2024-12-05 19:35:37.051089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:43.778 request:
00:15:43.778 {
00:15:43.778 "name": "raid_bdev1",
00:15:43.778 "raid_level": "raid1",
00:15:43.778 "base_bdevs": [
00:15:43.778 "malloc1",
00:15:43.778 "malloc2",
00:15:43.778 "malloc3",
00:15:43.778 "malloc4"
00:15:43.778 ],
00:15:43.778 "superblock": false,
00:15:43.778 "method": "bdev_raid_create",
00:15:43.778 "req_id": 1
00:15:43.778 }
00:15:43.778 Got JSON-RPC error response
00:15:43.778 response:
00:15:43.778 {
00:15:43.778 "code": -17,
00:15:43.778 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:43.778 }
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 [2024-12-05 19:35:37.116039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:43.778 [2024-12-05 19:35:37.116130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:43.778 [2024-12-05 19:35:37.116154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:43.778 [2024-12-05 19:35:37.116171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:43.778 [2024-12-05 19:35:37.118913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:43.778 [2024-12-05 19:35:37.118964] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:43.778 [2024-12-05 19:35:37.119050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:43.778 [2024-12-05 19:35:37.119121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:43.778 pt1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:43.778 "name": "raid_bdev1",
00:15:43.778 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659",
00:15:43.778 "strip_size_kb": 0,
00:15:43.778 "state": "configuring",
00:15:43.778 "raid_level": "raid1",
00:15:43.778 "superblock": true,
00:15:43.778 "num_base_bdevs": 4,
00:15:43.778 "num_base_bdevs_discovered": 1,
00:15:43.778 "num_base_bdevs_operational": 4,
00:15:43.778 "base_bdevs_list": [
00:15:43.778 {
00:15:43.778 "name": "pt1",
00:15:43.778 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:43.778 "is_configured": true,
00:15:43.778 "data_offset": 2048,
00:15:43.778 "data_size": 63488
00:15:43.778 },
00:15:43.778 {
00:15:43.778 "name": null,
00:15:43.778 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:43.778 "is_configured": false,
00:15:43.778 "data_offset": 2048,
00:15:43.778 "data_size": 63488
00:15:43.778 },
00:15:43.778 {
00:15:43.778 "name": null,
00:15:43.778 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:43.778 "is_configured": false,
00:15:43.778 "data_offset": 2048,
00:15:43.778 "data_size": 63488
00:15:43.778 },
00:15:43.778 {
00:15:43.778 "name": null,
00:15:43.778 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:43.778 "is_configured": false,
00:15:43.778 "data_offset": 2048,
00:15:43.778 "data_size": 63488
00:15:43.778 }
00:15:43.778 ]
00:15:43.778 }'
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:43.778 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.379 [2024-12-05 19:35:37.628261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:44.379 [2024-12-05 19:35:37.628379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:44.379 [2024-12-05 19:35:37.628411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:44.379 [2024-12-05 19:35:37.628428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:44.379 [2024-12-05 19:35:37.629004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:44.379 [2024-12-05 19:35:37.629050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:44.379 [2024-12-05 19:35:37.629153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:44.379 [2024-12-05 19:35:37.629190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:44.379 pt2
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.379 [2024-12-05 19:35:37.636238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:44.379 "name": "raid_bdev1",
00:15:44.379 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659",
00:15:44.379 "strip_size_kb": 0,
00:15:44.379 "state": "configuring",
00:15:44.379 "raid_level": "raid1",
00:15:44.379 "superblock": true,
00:15:44.379 "num_base_bdevs": 4,
00:15:44.379 "num_base_bdevs_discovered": 1,
00:15:44.379 "num_base_bdevs_operational": 4,
00:15:44.379 "base_bdevs_list": [
00:15:44.379 {
00:15:44.379 "name": "pt1",
00:15:44.379 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:44.379 "is_configured": true,
00:15:44.379 "data_offset": 2048,
00:15:44.379 "data_size": 63488
00:15:44.379 },
00:15:44.379 {
00:15:44.379 "name": null,
00:15:44.379 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:44.379 "is_configured": false,
00:15:44.379 "data_offset": 0,
00:15:44.379 "data_size": 63488
00:15:44.379 },
00:15:44.379 {
00:15:44.379 "name": null,
00:15:44.379 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:44.379 "is_configured": false,
00:15:44.379 "data_offset": 2048,
00:15:44.379 "data_size": 63488
00:15:44.379 },
00:15:44.379 {
00:15:44.379 "name": null,
00:15:44.379 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:44.379 "is_configured": false,
00:15:44.379 "data_offset": 2048,
00:15:44.379 "data_size": 63488
00:15:44.379 }
00:15:44.379 ]
00:15:44.379 }'
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:44.379 19:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.947 [2024-12-05 19:35:38.148542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:44.947 [2024-12-05 19:35:38.148617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:44.947 [2024-12-05 19:35:38.148649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:15:44.947 [2024-12-05 19:35:38.148665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:44.947 [2024-12-05 19:35:38.149241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:44.947 [2024-12-05 19:35:38.149278] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:44.947 [2024-12-05 19:35:38.149384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:44.947 [2024-12-05 19:35:38.149416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:44.947 pt2
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.947 [2024-12-05 19:35:38.156495] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:44.947 [2024-12-05 19:35:38.156550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:44.947 [2024-12-05 19:35:38.156577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:44.947 [2024-12-05 19:35:38.156591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:44.947 [2024-12-05 19:35:38.157048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:44.947 [2024-12-05 19:35:38.157087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:44.947 [2024-12-05 19:35:38.157171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:44.947 [2024-12-05 19:35:38.157198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:44.947 pt3
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.947 [2024-12-05 19:35:38.164523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:44.947 [2024-12-05 19:35:38.164548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:15:44.947 [2024-12-05 19:35:38.164562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:44.947 [2024-12-05 19:35:38.165026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:44.947 [2024-12-05 19:35:38.165070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:15:44.947 [2024-12-05 19:35:38.165151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:15:44.947 [2024-12-05 19:35:38.165187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:15:44.947 [2024-12-05 19:35:38.165365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:44.947 [2024-12-05 19:35:38.165392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:44.947 [2024-12-05 19:35:38.165732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:44.947 [2024-12-05 19:35:38.165937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:44.947 [2024-12-05 19:35:38.165967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:44.947 [2024-12-05 19:35:38.166128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:44.947 pt4
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.947 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:44.947 "name": "raid_bdev1",
00:15:44.947 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659",
00:15:44.947 "strip_size_kb": 0,
00:15:44.947 "state": "online",
00:15:44.947 "raid_level": "raid1",
00:15:44.947 "superblock": true,
00:15:44.947 "num_base_bdevs": 4,
00:15:44.947 "num_base_bdevs_discovered": 4,
00:15:44.947 "num_base_bdevs_operational": 4,
00:15:44.947 "base_bdevs_list": [
00:15:44.947 {
00:15:44.947 "name": "pt1",
00:15:44.947 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:44.947 "is_configured": true,
00:15:44.947 "data_offset": 2048,
00:15:44.947 "data_size": 63488
00:15:44.947 },
00:15:44.947 {
00:15:44.947 "name": "pt2",
00:15:44.947 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:44.947 "is_configured": true,
00:15:44.947 "data_offset": 2048,
00:15:44.947 "data_size": 63488
00:15:44.947 },
00:15:44.947 {
00:15:44.947 "name": "pt3",
00:15:44.947 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:44.947 "is_configured": true,
00:15:44.947 "data_offset": 2048,
00:15:44.948 "data_size": 63488
00:15:44.948 },
00:15:44.948 {
00:15:44.948 "name": "pt4",
00:15:44.948 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:44.948 "is_configured": true,
00:15:44.948 "data_offset": 2048,
00:15:44.948 "data_size": 63488
00:15:44.948 }
00:15:44.948 ]
00:15:44.948 }'
00:15:44.948 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:44.948 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b
raid_bdev1 00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.515 [2024-12-05 19:35:38.701169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.515 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.515 "name": "raid_bdev1", 00:15:45.515 "aliases": [ 00:15:45.515 "c0fa2ea3-8056-4888-afdc-4e9aec359659" 00:15:45.515 ], 00:15:45.515 "product_name": "Raid Volume", 00:15:45.515 "block_size": 512, 00:15:45.515 "num_blocks": 63488, 00:15:45.515 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:45.515 "assigned_rate_limits": { 00:15:45.515 "rw_ios_per_sec": 0, 00:15:45.515 "rw_mbytes_per_sec": 0, 00:15:45.515 "r_mbytes_per_sec": 0, 00:15:45.515 "w_mbytes_per_sec": 0 00:15:45.515 }, 00:15:45.515 "claimed": false, 00:15:45.515 "zoned": false, 00:15:45.515 "supported_io_types": { 00:15:45.515 "read": true, 00:15:45.515 "write": true, 00:15:45.515 "unmap": false, 00:15:45.515 "flush": false, 00:15:45.515 "reset": true, 00:15:45.515 "nvme_admin": false, 00:15:45.515 "nvme_io": false, 00:15:45.515 "nvme_io_md": false, 00:15:45.515 "write_zeroes": true, 00:15:45.515 "zcopy": false, 00:15:45.515 "get_zone_info": false, 00:15:45.515 "zone_management": false, 00:15:45.515 "zone_append": false, 00:15:45.515 "compare": false, 00:15:45.515 "compare_and_write": false, 00:15:45.515 "abort": false, 00:15:45.515 "seek_hole": false, 00:15:45.515 "seek_data": false, 00:15:45.515 "copy": false, 00:15:45.515 "nvme_iov_md": false 00:15:45.515 }, 00:15:45.515 "memory_domains": [ 00:15:45.515 { 00:15:45.515 "dma_device_id": "system", 00:15:45.515 
"dma_device_type": 1 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.515 "dma_device_type": 2 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "system", 00:15:45.515 "dma_device_type": 1 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.515 "dma_device_type": 2 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "system", 00:15:45.515 "dma_device_type": 1 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.515 "dma_device_type": 2 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "system", 00:15:45.515 "dma_device_type": 1 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.515 "dma_device_type": 2 00:15:45.515 } 00:15:45.515 ], 00:15:45.515 "driver_specific": { 00:15:45.515 "raid": { 00:15:45.515 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:45.515 "strip_size_kb": 0, 00:15:45.515 "state": "online", 00:15:45.515 "raid_level": "raid1", 00:15:45.515 "superblock": true, 00:15:45.515 "num_base_bdevs": 4, 00:15:45.515 "num_base_bdevs_discovered": 4, 00:15:45.515 "num_base_bdevs_operational": 4, 00:15:45.515 "base_bdevs_list": [ 00:15:45.515 { 00:15:45.515 "name": "pt1", 00:15:45.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.515 "is_configured": true, 00:15:45.515 "data_offset": 2048, 00:15:45.515 "data_size": 63488 00:15:45.515 }, 00:15:45.515 { 00:15:45.515 "name": "pt2", 00:15:45.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 2048, 00:15:45.516 "data_size": 63488 00:15:45.516 }, 00:15:45.516 { 00:15:45.516 "name": "pt3", 00:15:45.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 2048, 00:15:45.516 "data_size": 63488 00:15:45.516 }, 00:15:45.516 { 00:15:45.516 "name": "pt4", 00:15:45.516 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:45.516 "is_configured": true, 00:15:45.516 "data_offset": 2048, 00:15:45.516 "data_size": 63488 00:15:45.516 } 00:15:45.516 ] 00:15:45.516 } 00:15:45.516 } 00:15:45.516 }' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:45.516 pt2 00:15:45.516 pt3 00:15:45.516 pt4' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.516 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 19:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 [2024-12-05 19:35:39.109170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c0fa2ea3-8056-4888-afdc-4e9aec359659 '!=' c0fa2ea3-8056-4888-afdc-4e9aec359659 ']' 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 [2024-12-05 19:35:39.156868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:45.774 19:35:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.033 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.033 "name": "raid_bdev1", 00:15:46.033 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:46.033 "strip_size_kb": 0, 00:15:46.033 "state": "online", 
00:15:46.033 "raid_level": "raid1", 00:15:46.033 "superblock": true, 00:15:46.033 "num_base_bdevs": 4, 00:15:46.033 "num_base_bdevs_discovered": 3, 00:15:46.033 "num_base_bdevs_operational": 3, 00:15:46.033 "base_bdevs_list": [ 00:15:46.033 { 00:15:46.033 "name": null, 00:15:46.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.033 "is_configured": false, 00:15:46.033 "data_offset": 0, 00:15:46.033 "data_size": 63488 00:15:46.033 }, 00:15:46.033 { 00:15:46.033 "name": "pt2", 00:15:46.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.033 "is_configured": true, 00:15:46.033 "data_offset": 2048, 00:15:46.033 "data_size": 63488 00:15:46.033 }, 00:15:46.033 { 00:15:46.033 "name": "pt3", 00:15:46.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.033 "is_configured": true, 00:15:46.033 "data_offset": 2048, 00:15:46.033 "data_size": 63488 00:15:46.033 }, 00:15:46.033 { 00:15:46.033 "name": "pt4", 00:15:46.033 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.033 "is_configured": true, 00:15:46.033 "data_offset": 2048, 00:15:46.033 "data_size": 63488 00:15:46.033 } 00:15:46.033 ] 00:15:46.033 }' 00:15:46.033 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.033 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.291 [2024-12-05 19:35:39.668987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.291 [2024-12-05 19:35:39.669029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.291 [2024-12-05 19:35:39.669143] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:46.291 [2024-12-05 19:35:39.669262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.291 [2024-12-05 19:35:39.669278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.291 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.550 
19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.550 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.551 [2024-12-05 19:35:39.760991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.551 [2024-12-05 19:35:39.761054] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.551 [2024-12-05 19:35:39.761084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:46.551 [2024-12-05 19:35:39.761099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.551 [2024-12-05 19:35:39.764077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.551 [2024-12-05 19:35:39.764165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.551 [2024-12-05 19:35:39.764283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.551 [2024-12-05 19:35:39.764341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.551 pt2 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.551 "name": "raid_bdev1", 00:15:46.551 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:46.551 "strip_size_kb": 0, 00:15:46.551 "state": "configuring", 00:15:46.551 "raid_level": "raid1", 00:15:46.551 "superblock": true, 00:15:46.551 "num_base_bdevs": 4, 00:15:46.551 "num_base_bdevs_discovered": 1, 00:15:46.551 "num_base_bdevs_operational": 3, 00:15:46.551 "base_bdevs_list": [ 00:15:46.551 { 00:15:46.551 "name": null, 00:15:46.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.551 "is_configured": false, 00:15:46.551 "data_offset": 2048, 00:15:46.551 "data_size": 63488 00:15:46.551 }, 00:15:46.551 { 00:15:46.551 "name": "pt2", 00:15:46.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.551 "is_configured": true, 00:15:46.551 "data_offset": 2048, 00:15:46.551 "data_size": 63488 00:15:46.551 }, 00:15:46.551 { 00:15:46.551 "name": null, 00:15:46.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.551 "is_configured": false, 00:15:46.551 "data_offset": 2048, 00:15:46.551 "data_size": 63488 00:15:46.551 }, 00:15:46.551 { 00:15:46.551 "name": null, 00:15:46.551 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.551 "is_configured": false, 00:15:46.551 "data_offset": 2048, 00:15:46.551 "data_size": 63488 00:15:46.551 } 00:15:46.551 ] 00:15:46.551 }' 
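The trace above keeps validating `raid_bdev1` by dumping `bdev_raid_get_bdevs all` and filtering the result with `jq -r '.[] | select(.name == "raid_bdev1")'`. Below is a minimal stand-alone sketch of that selection step, run against a canned JSON payload rather than a live SPDK target, and using `python3` in place of `jq` so it runs without jq installed; the payload values are illustrative, not taken from a real target.

```shell
#!/bin/sh
# Canned stand-in for `rpc.py bdev_raid_get_bdevs all` output; no SPDK target is assumed.
payload='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":3,"num_base_bdevs_operational":3},{"name":"other_bdev","state":"offline"}]'

# Same selection the test script performs with jq: keep only the entry named
# raid_bdev1, then pull out the fields verify_raid_bdev_state compares against
# its expected state, raid level, and base-bdev counts.
info=$(printf '%s' "$payload" | python3 -c '
import json, sys
for bdev in json.load(sys.stdin):
    if bdev["name"] == "raid_bdev1":
        print(bdev["state"], bdev["raid_level"], bdev["num_base_bdevs_discovered"])
')
echo "$info"
```

With jq available, the filter used in the script itself, `jq -r '.[] | select(.name == "raid_bdev1")'`, performs the same selection directly on the RPC output.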
00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.551 19:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.118 [2024-12-05 19:35:40.293197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.118 [2024-12-05 19:35:40.293272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.118 [2024-12-05 19:35:40.293305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:47.118 [2024-12-05 19:35:40.293321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.118 [2024-12-05 19:35:40.293914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.118 [2024-12-05 19:35:40.293951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.118 [2024-12-05 19:35:40.294067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:47.118 [2024-12-05 19:35:40.294099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.118 pt3 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.118 "name": "raid_bdev1", 00:15:47.118 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:47.118 "strip_size_kb": 0, 00:15:47.118 "state": "configuring", 00:15:47.118 "raid_level": "raid1", 00:15:47.118 "superblock": true, 00:15:47.118 "num_base_bdevs": 4, 00:15:47.118 "num_base_bdevs_discovered": 2, 00:15:47.118 "num_base_bdevs_operational": 3, 00:15:47.118 
"base_bdevs_list": [ 00:15:47.118 { 00:15:47.118 "name": null, 00:15:47.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.118 "is_configured": false, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 }, 00:15:47.118 { 00:15:47.118 "name": "pt2", 00:15:47.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.118 "is_configured": true, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 }, 00:15:47.118 { 00:15:47.118 "name": "pt3", 00:15:47.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.118 "is_configured": true, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 }, 00:15:47.118 { 00:15:47.118 "name": null, 00:15:47.118 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.118 "is_configured": false, 00:15:47.118 "data_offset": 2048, 00:15:47.118 "data_size": 63488 00:15:47.118 } 00:15:47.118 ] 00:15:47.118 }' 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.118 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.687 [2024-12-05 19:35:40.837354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:47.687 [2024-12-05 19:35:40.837439] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.687 [2024-12-05 19:35:40.837479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:47.687 [2024-12-05 19:35:40.837495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.687 [2024-12-05 19:35:40.838092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.687 [2024-12-05 19:35:40.838127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:47.687 [2024-12-05 19:35:40.838236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:47.687 [2024-12-05 19:35:40.838268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:47.687 [2024-12-05 19:35:40.838435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:47.687 [2024-12-05 19:35:40.838451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:47.687 [2024-12-05 19:35:40.838777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:47.687 [2024-12-05 19:35:40.838977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:47.687 [2024-12-05 19:35:40.838999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:47.687 [2024-12-05 19:35:40.839166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.687 pt4 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.687 "name": "raid_bdev1", 00:15:47.687 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:47.687 "strip_size_kb": 0, 00:15:47.687 "state": "online", 00:15:47.687 "raid_level": "raid1", 00:15:47.687 "superblock": true, 00:15:47.687 "num_base_bdevs": 4, 00:15:47.687 "num_base_bdevs_discovered": 3, 00:15:47.687 "num_base_bdevs_operational": 3, 00:15:47.687 "base_bdevs_list": [ 00:15:47.687 { 00:15:47.687 "name": null, 00:15:47.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.687 "is_configured": false, 00:15:47.687 
"data_offset": 2048, 00:15:47.687 "data_size": 63488 00:15:47.687 }, 00:15:47.687 { 00:15:47.687 "name": "pt2", 00:15:47.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.687 "is_configured": true, 00:15:47.687 "data_offset": 2048, 00:15:47.687 "data_size": 63488 00:15:47.687 }, 00:15:47.687 { 00:15:47.687 "name": "pt3", 00:15:47.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.687 "is_configured": true, 00:15:47.687 "data_offset": 2048, 00:15:47.687 "data_size": 63488 00:15:47.687 }, 00:15:47.687 { 00:15:47.687 "name": "pt4", 00:15:47.687 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.687 "is_configured": true, 00:15:47.687 "data_offset": 2048, 00:15:47.687 "data_size": 63488 00:15:47.687 } 00:15:47.687 ] 00:15:47.687 }' 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.687 19:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.946 [2024-12-05 19:35:41.369481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.946 [2024-12-05 19:35:41.369516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.946 [2024-12-05 19:35:41.369628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.946 [2024-12-05 19:35:41.369752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.946 [2024-12-05 19:35:41.369774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:47.946 19:35:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.946 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.206 [2024-12-05 19:35:41.437486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.206 [2024-12-05 19:35:41.437560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:48.206 [2024-12-05 19:35:41.437587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:48.206 [2024-12-05 19:35:41.437608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.206 [2024-12-05 19:35:41.440465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.206 [2024-12-05 19:35:41.440516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.206 [2024-12-05 19:35:41.440624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:48.206 [2024-12-05 19:35:41.440686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.206 [2024-12-05 19:35:41.440870] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:48.206 [2024-12-05 19:35:41.440905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.206 [2024-12-05 19:35:41.440928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:48.206 [2024-12-05 19:35:41.441001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.206 [2024-12-05 19:35:41.441159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.206 pt1 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.206 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.207 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.207 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.207 "name": "raid_bdev1", 00:15:48.207 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:48.207 "strip_size_kb": 0, 00:15:48.207 "state": "configuring", 00:15:48.207 "raid_level": "raid1", 00:15:48.207 "superblock": true, 00:15:48.207 "num_base_bdevs": 4, 00:15:48.207 "num_base_bdevs_discovered": 2, 00:15:48.207 "num_base_bdevs_operational": 3, 00:15:48.207 "base_bdevs_list": [ 00:15:48.207 { 00:15:48.207 "name": null, 00:15:48.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.207 "is_configured": false, 00:15:48.207 "data_offset": 2048, 00:15:48.207 
"data_size": 63488 00:15:48.207 }, 00:15:48.207 { 00:15:48.207 "name": "pt2", 00:15:48.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.207 "is_configured": true, 00:15:48.207 "data_offset": 2048, 00:15:48.207 "data_size": 63488 00:15:48.207 }, 00:15:48.207 { 00:15:48.207 "name": "pt3", 00:15:48.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.207 "is_configured": true, 00:15:48.207 "data_offset": 2048, 00:15:48.207 "data_size": 63488 00:15:48.207 }, 00:15:48.207 { 00:15:48.207 "name": null, 00:15:48.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.207 "is_configured": false, 00:15:48.207 "data_offset": 2048, 00:15:48.207 "data_size": 63488 00:15:48.207 } 00:15:48.207 ] 00:15:48.207 }' 00:15:48.207 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.207 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.776 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:48.776 19:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:48.776 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.776 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.776 19:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.776 [2024-12-05 
19:35:42.021688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:48.776 [2024-12-05 19:35:42.021793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.776 [2024-12-05 19:35:42.021830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:48.776 [2024-12-05 19:35:42.021846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.776 [2024-12-05 19:35:42.022394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.776 [2024-12-05 19:35:42.022431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:48.776 [2024-12-05 19:35:42.022535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:48.776 [2024-12-05 19:35:42.022567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:48.776 [2024-12-05 19:35:42.022753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:48.776 [2024-12-05 19:35:42.022770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.776 [2024-12-05 19:35:42.023097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:48.776 [2024-12-05 19:35:42.023278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:48.776 [2024-12-05 19:35:42.023310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:48.776 [2024-12-05 19:35:42.023485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.776 pt4 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.776 19:35:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.776 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.776 "name": "raid_bdev1", 00:15:48.776 "uuid": "c0fa2ea3-8056-4888-afdc-4e9aec359659", 00:15:48.776 "strip_size_kb": 0, 00:15:48.776 "state": "online", 00:15:48.776 "raid_level": "raid1", 00:15:48.776 "superblock": true, 00:15:48.776 "num_base_bdevs": 4, 00:15:48.776 "num_base_bdevs_discovered": 3, 00:15:48.776 "num_base_bdevs_operational": 3, 00:15:48.776 "base_bdevs_list": [ 00:15:48.776 { 
00:15:48.776 "name": null, 00:15:48.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.776 "is_configured": false, 00:15:48.776 "data_offset": 2048, 00:15:48.776 "data_size": 63488 00:15:48.776 }, 00:15:48.776 { 00:15:48.776 "name": "pt2", 00:15:48.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.776 "is_configured": true, 00:15:48.776 "data_offset": 2048, 00:15:48.776 "data_size": 63488 00:15:48.776 }, 00:15:48.776 { 00:15:48.776 "name": "pt3", 00:15:48.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.776 "is_configured": true, 00:15:48.776 "data_offset": 2048, 00:15:48.776 "data_size": 63488 00:15:48.776 }, 00:15:48.776 { 00:15:48.776 "name": "pt4", 00:15:48.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.777 "is_configured": true, 00:15:48.777 "data_offset": 2048, 00:15:48.777 "data_size": 63488 00:15:48.777 } 00:15:48.777 ] 00:15:48.777 }' 00:15:48.777 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.777 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.345 
19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:49.345 [2024-12-05 19:35:42.610248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c0fa2ea3-8056-4888-afdc-4e9aec359659 '!=' c0fa2ea3-8056-4888-afdc-4e9aec359659 ']' 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74674 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74674 ']' 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74674 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74674 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.345 killing process with pid 74674 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74674' 00:15:49.345 19:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74674 00:15:49.345 [2024-12-05 19:35:42.692588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.345 [2024-12-05 19:35:42.692721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.345 19:35:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74674 00:15:49.345 [2024-12-05 19:35:42.692843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.345 [2024-12-05 19:35:42.692864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:49.605 [2024-12-05 19:35:43.045271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.983 19:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:50.983 00:15:50.983 real 0m9.445s 00:15:50.983 user 0m15.548s 00:15:50.983 sys 0m1.368s 00:15:50.983 19:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.983 19:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.983 ************************************ 00:15:50.983 END TEST raid_superblock_test 00:15:50.983 ************************************ 00:15:50.983 19:35:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:50.983 19:35:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:50.983 19:35:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.983 19:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.983 ************************************ 00:15:50.983 START TEST raid_read_error_test 00:15:50.983 ************************************ 00:15:50.983 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:50.983 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:50.983 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:50.983 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:50.983 
19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:50.984 19:35:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UlJAFgOdnU 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75178 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75178 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75178 ']' 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.984 19:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.984 [2024-12-05 19:35:44.284203] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:50.984 [2024-12-05 19:35:44.284397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:15:51.243 [2024-12-05 19:35:44.470818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.243 [2024-12-05 19:35:44.602268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.502 [2024-12-05 19:35:44.807007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.502 [2024-12-05 19:35:44.807060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 BaseBdev1_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 true 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 [2024-12-05 19:35:45.335303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:52.070 [2024-12-05 19:35:45.335366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.070 [2024-12-05 19:35:45.335393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:52.070 [2024-12-05 19:35:45.335410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.070 [2024-12-05 19:35:45.338263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.070 [2024-12-05 19:35:45.338308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.070 BaseBdev1 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 BaseBdev2_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 true 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 [2024-12-05 19:35:45.396083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:52.070 [2024-12-05 19:35:45.396156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.070 [2024-12-05 19:35:45.396181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:52.070 [2024-12-05 19:35:45.396198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.070 [2024-12-05 19:35:45.399190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.070 [2024-12-05 19:35:45.399233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:52.070 BaseBdev2 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 BaseBdev3_malloc 00:15:52.070 19:35:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 true 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.070 [2024-12-05 19:35:45.468999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:52.070 [2024-12-05 19:35:45.469065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.070 [2024-12-05 19:35:45.469107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:52.070 [2024-12-05 19:35:45.469140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.070 [2024-12-05 19:35:45.471969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.070 [2024-12-05 19:35:45.472031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:52.070 BaseBdev3 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.070 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.329 BaseBdev4_malloc 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.329 true 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.329 [2024-12-05 19:35:45.526561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:52.329 [2024-12-05 19:35:45.526625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.329 [2024-12-05 19:35:45.526655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:52.329 [2024-12-05 19:35:45.526672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.329 [2024-12-05 19:35:45.529583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.329 [2024-12-05 19:35:45.529632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:52.329 BaseBdev4 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.329 [2024-12-05 19:35:45.534640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.329 [2024-12-05 19:35:45.537086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.329 [2024-12-05 19:35:45.537213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.329 [2024-12-05 19:35:45.537316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:52.329 [2024-12-05 19:35:45.537628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:52.329 [2024-12-05 19:35:45.537659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:52.329 [2024-12-05 19:35:45.537984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:52.329 [2024-12-05 19:35:45.538216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:52.329 [2024-12-05 19:35:45.538240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:52.329 [2024-12-05 19:35:45.538434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:52.329 19:35:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.329 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.329 "name": "raid_bdev1", 00:15:52.329 "uuid": "2fc35e90-a89f-4dee-a0ca-8a4b80ea04c0", 00:15:52.329 "strip_size_kb": 0, 00:15:52.329 "state": "online", 00:15:52.329 "raid_level": "raid1", 00:15:52.329 "superblock": true, 00:15:52.329 "num_base_bdevs": 4, 00:15:52.329 "num_base_bdevs_discovered": 4, 00:15:52.329 "num_base_bdevs_operational": 4, 00:15:52.329 "base_bdevs_list": [ 00:15:52.329 { 
00:15:52.329 "name": "BaseBdev1", 00:15:52.329 "uuid": "e6adecf9-888e-57cf-977d-486c25a2c281", 00:15:52.329 "is_configured": true, 00:15:52.329 "data_offset": 2048, 00:15:52.329 "data_size": 63488 00:15:52.330 }, 00:15:52.330 { 00:15:52.330 "name": "BaseBdev2", 00:15:52.330 "uuid": "3f1a1f8f-9613-50e1-8c3f-8988f6ddbef9", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 }, 00:15:52.330 { 00:15:52.330 "name": "BaseBdev3", 00:15:52.330 "uuid": "5c9bab02-1928-56bd-9dff-3ed7d226a74f", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 }, 00:15:52.330 { 00:15:52.330 "name": "BaseBdev4", 00:15:52.330 "uuid": "9dec1e30-82f7-55d5-ab11-1c6e066d1893", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 } 00:15:52.330 ] 00:15:52.330 }' 00:15:52.330 19:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.330 19:35:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.898 19:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:52.898 19:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:52.898 [2024-12-05 19:35:46.152243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:53.835 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:53.835 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.836 19:35:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.836 19:35:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.836 "name": "raid_bdev1", 00:15:53.836 "uuid": "2fc35e90-a89f-4dee-a0ca-8a4b80ea04c0", 00:15:53.836 "strip_size_kb": 0, 00:15:53.836 "state": "online", 00:15:53.836 "raid_level": "raid1", 00:15:53.836 "superblock": true, 00:15:53.836 "num_base_bdevs": 4, 00:15:53.836 "num_base_bdevs_discovered": 4, 00:15:53.836 "num_base_bdevs_operational": 4, 00:15:53.836 "base_bdevs_list": [ 00:15:53.836 { 00:15:53.836 "name": "BaseBdev1", 00:15:53.836 "uuid": "e6adecf9-888e-57cf-977d-486c25a2c281", 00:15:53.836 "is_configured": true, 00:15:53.836 "data_offset": 2048, 00:15:53.836 "data_size": 63488 00:15:53.836 }, 00:15:53.836 { 00:15:53.836 "name": "BaseBdev2", 00:15:53.836 "uuid": "3f1a1f8f-9613-50e1-8c3f-8988f6ddbef9", 00:15:53.836 "is_configured": true, 00:15:53.836 "data_offset": 2048, 00:15:53.836 "data_size": 63488 00:15:53.836 }, 00:15:53.836 { 00:15:53.836 "name": "BaseBdev3", 00:15:53.836 "uuid": "5c9bab02-1928-56bd-9dff-3ed7d226a74f", 00:15:53.836 "is_configured": true, 00:15:53.836 "data_offset": 2048, 00:15:53.836 "data_size": 63488 00:15:53.836 }, 00:15:53.836 { 00:15:53.836 "name": "BaseBdev4", 00:15:53.836 "uuid": "9dec1e30-82f7-55d5-ab11-1c6e066d1893", 00:15:53.836 "is_configured": true, 00:15:53.836 "data_offset": 2048, 00:15:53.836 "data_size": 63488 00:15:53.836 } 00:15:53.836 ] 00:15:53.836 }' 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.836 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.405 [2024-12-05 19:35:47.601648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.405 [2024-12-05 19:35:47.601702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.405 [2024-12-05 19:35:47.605272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.405 [2024-12-05 19:35:47.605364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.405 [2024-12-05 19:35:47.605506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.405 [2024-12-05 19:35:47.605525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:54.405 { 00:15:54.405 "results": [ 00:15:54.405 { 00:15:54.405 "job": "raid_bdev1", 00:15:54.405 "core_mask": "0x1", 00:15:54.405 "workload": "randrw", 00:15:54.405 "percentage": 50, 00:15:54.405 "status": "finished", 00:15:54.405 "queue_depth": 1, 00:15:54.405 "io_size": 131072, 00:15:54.405 "runtime": 1.447247, 00:15:54.405 "iops": 7432.041662549655, 00:15:54.405 "mibps": 929.0052078187068, 00:15:54.405 "io_failed": 0, 00:15:54.405 "io_timeout": 0, 00:15:54.405 "avg_latency_us": 130.18987795395384, 00:15:54.405 "min_latency_us": 40.02909090909091, 00:15:54.405 "max_latency_us": 1966.08 00:15:54.405 } 00:15:54.405 ], 00:15:54.405 "core_count": 1 00:15:54.405 } 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75178 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75178 ']' 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75178 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75178 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.405 killing process with pid 75178 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75178' 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75178 00:15:54.405 [2024-12-05 19:35:47.643420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.405 19:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75178 00:15:54.664 [2024-12-05 19:35:47.927851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UlJAFgOdnU 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:55.602 00:15:55.602 real 0m4.859s 00:15:55.602 user 0m5.982s 00:15:55.602 sys 0m0.626s 
00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.602 19:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.602 ************************************ 00:15:55.602 END TEST raid_read_error_test 00:15:55.602 ************************************ 00:15:55.862 19:35:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:55.862 19:35:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:55.862 19:35:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.862 19:35:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.862 ************************************ 00:15:55.862 START TEST raid_write_error_test 00:15:55.862 ************************************ 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hgQLIk3VUV 00:15:55.862 19:35:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75318 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75318 00:15:55.862 19:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:55.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.863 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75318 ']' 00:15:55.863 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.863 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.863 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.863 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.863 19:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.863 [2024-12-05 19:35:49.191069] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:15:55.863 [2024-12-05 19:35:49.191517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75318 ] 00:15:56.121 [2024-12-05 19:35:49.379024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.121 [2024-12-05 19:35:49.527836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.380 [2024-12-05 19:35:49.728634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.380 [2024-12-05 19:35:49.728999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 BaseBdev1_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 true 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 [2024-12-05 19:35:50.238787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:56.988 [2024-12-05 19:35:50.238866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.988 [2024-12-05 19:35:50.238895] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:56.988 [2024-12-05 19:35:50.238912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.988 [2024-12-05 19:35:50.241679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.988 [2024-12-05 19:35:50.241753] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.988 BaseBdev1 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 BaseBdev2_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:56.988 19:35:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 true 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 [2024-12-05 19:35:50.298042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:56.988 [2024-12-05 19:35:50.298105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.988 [2024-12-05 19:35:50.298130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:56.988 [2024-12-05 19:35:50.298147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.988 [2024-12-05 19:35:50.301153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.988 [2024-12-05 19:35:50.301216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:56.988 BaseBdev2 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:56.988 BaseBdev3_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 true 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.988 [2024-12-05 19:35:50.366959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:56.988 [2024-12-05 19:35:50.367023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.988 [2024-12-05 19:35:50.367048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:56.988 [2024-12-05 19:35:50.367065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.988 [2024-12-05 19:35:50.369964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.988 [2024-12-05 19:35:50.370016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:56.988 BaseBdev3 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:56.988 19:35:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.989 BaseBdev4_malloc 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.989 true 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.989 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.989 [2024-12-05 19:35:50.426198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:56.989 [2024-12-05 19:35:50.426293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.989 [2024-12-05 19:35:50.426318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:56.989 [2024-12-05 19:35:50.426334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.248 [2024-12-05 19:35:50.429357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.248 [2024-12-05 19:35:50.429406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:57.248 BaseBdev4 
00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.248 [2024-12-05 19:35:50.434375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.248 [2024-12-05 19:35:50.436933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.248 [2024-12-05 19:35:50.437037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.248 [2024-12-05 19:35:50.437164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.248 [2024-12-05 19:35:50.437480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:57.248 [2024-12-05 19:35:50.437502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:57.248 [2024-12-05 19:35:50.437846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:57.248 [2024-12-05 19:35:50.438081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:57.248 [2024-12-05 19:35:50.438127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:57.248 [2024-12-05 19:35:50.438339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.248 "name": "raid_bdev1", 00:15:57.248 "uuid": "23b8f5b8-a135-4044-bbc2-3ce1f61e5511", 00:15:57.248 "strip_size_kb": 0, 00:15:57.248 "state": "online", 00:15:57.248 "raid_level": "raid1", 00:15:57.248 "superblock": true, 00:15:57.248 "num_base_bdevs": 4, 00:15:57.248 "num_base_bdevs_discovered": 4, 00:15:57.248 
"num_base_bdevs_operational": 4, 00:15:57.248 "base_bdevs_list": [ 00:15:57.248 { 00:15:57.248 "name": "BaseBdev1", 00:15:57.248 "uuid": "e72e8c5f-e6c6-551e-8bef-54628f80543e", 00:15:57.248 "is_configured": true, 00:15:57.248 "data_offset": 2048, 00:15:57.248 "data_size": 63488 00:15:57.248 }, 00:15:57.248 { 00:15:57.248 "name": "BaseBdev2", 00:15:57.248 "uuid": "b1908d7c-3eb4-537c-bf0d-28e6475d86e9", 00:15:57.248 "is_configured": true, 00:15:57.248 "data_offset": 2048, 00:15:57.248 "data_size": 63488 00:15:57.248 }, 00:15:57.248 { 00:15:57.248 "name": "BaseBdev3", 00:15:57.248 "uuid": "1a4a9dd2-05ea-5fca-9f7f-aab46484225a", 00:15:57.248 "is_configured": true, 00:15:57.248 "data_offset": 2048, 00:15:57.248 "data_size": 63488 00:15:57.248 }, 00:15:57.248 { 00:15:57.248 "name": "BaseBdev4", 00:15:57.248 "uuid": "6345551c-5e98-506d-a6f2-a6d2a8595673", 00:15:57.248 "is_configured": true, 00:15:57.248 "data_offset": 2048, 00:15:57.248 "data_size": 63488 00:15:57.248 } 00:15:57.248 ] 00:15:57.248 }' 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.248 19:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.506 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:57.506 19:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:57.765 [2024-12-05 19:35:51.028072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.701 [2024-12-05 19:35:51.932918] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:58.701 [2024-12-05 19:35:51.932981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.701 [2024-12-05 19:35:51.933270] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.701 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.701 "name": "raid_bdev1", 00:15:58.701 "uuid": "23b8f5b8-a135-4044-bbc2-3ce1f61e5511", 00:15:58.701 "strip_size_kb": 0, 00:15:58.701 "state": "online", 00:15:58.701 "raid_level": "raid1", 00:15:58.701 "superblock": true, 00:15:58.701 "num_base_bdevs": 4, 00:15:58.701 "num_base_bdevs_discovered": 3, 00:15:58.701 "num_base_bdevs_operational": 3, 00:15:58.701 "base_bdevs_list": [ 00:15:58.701 { 00:15:58.701 "name": null, 00:15:58.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.701 "is_configured": false, 00:15:58.702 "data_offset": 0, 00:15:58.702 "data_size": 63488 00:15:58.702 }, 00:15:58.702 { 00:15:58.702 "name": "BaseBdev2", 00:15:58.702 "uuid": "b1908d7c-3eb4-537c-bf0d-28e6475d86e9", 00:15:58.702 "is_configured": true, 00:15:58.702 "data_offset": 2048, 00:15:58.702 "data_size": 63488 00:15:58.702 }, 00:15:58.702 { 00:15:58.702 "name": "BaseBdev3", 00:15:58.702 "uuid": "1a4a9dd2-05ea-5fca-9f7f-aab46484225a", 00:15:58.702 "is_configured": true, 00:15:58.702 "data_offset": 2048, 00:15:58.702 "data_size": 63488 00:15:58.702 }, 00:15:58.702 { 00:15:58.702 "name": "BaseBdev4", 00:15:58.702 "uuid": "6345551c-5e98-506d-a6f2-a6d2a8595673", 00:15:58.702 "is_configured": true, 00:15:58.702 "data_offset": 2048, 00:15:58.702 "data_size": 63488 00:15:58.702 } 00:15:58.702 ] 
00:15:58.702 }' 00:15:58.702 19:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.702 19:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.268 19:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.268 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.268 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.268 [2024-12-05 19:35:52.480725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.268 [2024-12-05 19:35:52.480943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.268 [2024-12-05 19:35:52.484223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.268 [2024-12-05 19:35:52.484413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.268 [2024-12-05 19:35:52.484561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.268 [2024-12-05 19:35:52.484580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:59.268 { 00:15:59.268 "results": [ 00:15:59.268 { 00:15:59.268 "job": "raid_bdev1", 00:15:59.268 "core_mask": "0x1", 00:15:59.268 "workload": "randrw", 00:15:59.268 "percentage": 50, 00:15:59.268 "status": "finished", 00:15:59.268 "queue_depth": 1, 00:15:59.268 "io_size": 131072, 00:15:59.268 "runtime": 1.4504, 00:15:59.268 "iops": 8138.444567015996, 00:15:59.268 "mibps": 1017.3055708769995, 00:15:59.268 "io_failed": 0, 00:15:59.268 "io_timeout": 0, 00:15:59.268 "avg_latency_us": 118.60207880225501, 00:15:59.268 "min_latency_us": 38.63272727272727, 00:15:59.268 "max_latency_us": 1757.5563636363636 00:15:59.268 } 00:15:59.268 ], 00:15:59.268 "core_count": 1 
00:15:59.268 } 00:15:59.268 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.268 19:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75318 00:15:59.268 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75318 ']' 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75318 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75318 00:15:59.269 killing process with pid 75318 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75318' 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75318 00:15:59.269 [2024-12-05 19:35:52.519953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.269 19:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75318 00:15:59.527 [2024-12-05 19:35:52.799641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hgQLIk3VUV 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.462 ************************************ 00:16:00.462 END TEST raid_write_error_test 00:16:00.462 ************************************ 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:00.462 00:16:00.462 real 0m4.795s 00:16:00.462 user 0m5.911s 00:16:00.462 sys 0m0.602s 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.462 19:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.720 19:35:53 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:16:00.720 19:35:53 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:00.720 19:35:53 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:00.720 19:35:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:00.720 19:35:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.720 19:35:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.720 ************************************ 00:16:00.720 START TEST raid_rebuild_test 00:16:00.720 ************************************ 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:00.720 
19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75462 00:16:00.720 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75462 00:16:00.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75462 ']' 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.721 19:35:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.721 [2024-12-05 19:35:54.046627] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:16:00.721 [2024-12-05 19:35:54.047030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75462 ] 00:16:00.721 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:00.721 Zero copy mechanism will not be used. 
00:16:00.979 [2024-12-05 19:35:54.233433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.979 [2024-12-05 19:35:54.354902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.237 [2024-12-05 19:35:54.560273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.237 [2024-12-05 19:35:54.560556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 BaseBdev1_malloc 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 [2024-12-05 19:35:55.066638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:01.804 [2024-12-05 19:35:55.066755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.804 [2024-12-05 19:35:55.066789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.804 [2024-12-05 19:35:55.066809] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.804 [2024-12-05 19:35:55.069654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.804 [2024-12-05 19:35:55.069745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:01.804 BaseBdev1 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 BaseBdev2_malloc 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 [2024-12-05 19:35:55.122859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:01.804 [2024-12-05 19:35:55.122937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.804 [2024-12-05 19:35:55.122971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.804 [2024-12-05 19:35:55.122990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.804 [2024-12-05 19:35:55.125793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.804 [2024-12-05 19:35:55.125839] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:01.804 BaseBdev2 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 spare_malloc 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 spare_delay 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 [2024-12-05 19:35:55.196409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.804 [2024-12-05 19:35:55.196520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.804 [2024-12-05 19:35:55.196564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:01.804 [2024-12-05 19:35:55.196581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.804 [2024-12-05 
19:35:55.199610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.804 [2024-12-05 19:35:55.199681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.804 spare 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:01.804 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.805 [2024-12-05 19:35:55.204619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.805 [2024-12-05 19:35:55.207291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.805 [2024-12-05 19:35:55.207611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:01.805 [2024-12-05 19:35:55.207642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:01.805 [2024-12-05 19:35:55.208021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:01.805 [2024-12-05 19:35:55.208278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:01.805 [2024-12-05 19:35:55.208296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:01.805 [2024-12-05 19:35:55.208550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.805 19:35:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.805 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.063 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.063 "name": "raid_bdev1", 00:16:02.063 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:02.063 "strip_size_kb": 0, 00:16:02.063 "state": "online", 00:16:02.063 "raid_level": "raid1", 00:16:02.063 "superblock": false, 00:16:02.063 "num_base_bdevs": 2, 00:16:02.063 "num_base_bdevs_discovered": 2, 00:16:02.063 "num_base_bdevs_operational": 2, 00:16:02.063 "base_bdevs_list": [ 00:16:02.063 { 00:16:02.063 "name": "BaseBdev1", 
00:16:02.063 "uuid": "8e834e99-3cf4-5a9b-9d8e-6e1000aa7ca9", 00:16:02.063 "is_configured": true, 00:16:02.063 "data_offset": 0, 00:16:02.063 "data_size": 65536 00:16:02.063 }, 00:16:02.063 { 00:16:02.063 "name": "BaseBdev2", 00:16:02.063 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:02.063 "is_configured": true, 00:16:02.063 "data_offset": 0, 00:16:02.063 "data_size": 65536 00:16:02.063 } 00:16:02.063 ] 00:16:02.063 }' 00:16:02.063 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.063 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.335 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.335 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.336 [2024-12-05 19:35:55.721193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.336 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:02.594 
19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.594 19:35:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:02.853 [2024-12-05 19:35:56.065016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:02.853 /dev/nbd0 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.853 1+0 records in 00:16:02.853 1+0 records out 00:16:02.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049828 s, 8.2 MB/s 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:02.853 19:35:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:16:10.969 65536+0 records in 00:16:10.969 65536+0 records out 00:16:10.969 33554432 bytes (34 MB, 32 MiB) copied, 7.12194 s, 4.7 MB/s 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.969 [2024-12-05 19:36:03.523502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 [2024-12-05 19:36:03.555552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.969 19:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.969 19:36:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.969 "name": "raid_bdev1", 00:16:10.970 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:10.970 "strip_size_kb": 0, 00:16:10.970 "state": "online", 00:16:10.970 "raid_level": "raid1", 00:16:10.970 "superblock": false, 00:16:10.970 "num_base_bdevs": 2, 00:16:10.970 "num_base_bdevs_discovered": 1, 00:16:10.970 "num_base_bdevs_operational": 1, 00:16:10.970 "base_bdevs_list": [ 00:16:10.970 { 00:16:10.970 "name": null, 00:16:10.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.970 "is_configured": false, 00:16:10.970 "data_offset": 0, 00:16:10.970 "data_size": 65536 00:16:10.970 }, 00:16:10.970 { 00:16:10.970 "name": "BaseBdev2", 00:16:10.970 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:10.970 "is_configured": true, 00:16:10.970 "data_offset": 0, 00:16:10.970 "data_size": 65536 00:16:10.970 } 00:16:10.970 ] 00:16:10.970 }' 00:16:10.970 19:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.970 19:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.970 19:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.970 19:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.970 19:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.970 [2024-12-05 19:36:04.067848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.970 [2024-12-05 19:36:04.084472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:16:10.970 19:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.970 19:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:10.970 [2024-12-05 19:36:04.087004] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.905 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.905 "name": "raid_bdev1", 00:16:11.905 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:11.905 "strip_size_kb": 0, 00:16:11.905 "state": "online", 00:16:11.905 "raid_level": "raid1", 00:16:11.905 "superblock": false, 00:16:11.905 "num_base_bdevs": 2, 00:16:11.905 "num_base_bdevs_discovered": 2, 00:16:11.905 "num_base_bdevs_operational": 2, 00:16:11.905 "process": { 00:16:11.905 "type": "rebuild", 00:16:11.905 "target": "spare", 00:16:11.905 "progress": { 00:16:11.906 "blocks": 18432, 00:16:11.906 "percent": 28 00:16:11.906 } 00:16:11.906 }, 00:16:11.906 "base_bdevs_list": [ 00:16:11.906 { 00:16:11.906 "name": "spare", 00:16:11.906 "uuid": "dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:11.906 "is_configured": true, 00:16:11.906 "data_offset": 0, 00:16:11.906 
"data_size": 65536 00:16:11.906 }, 00:16:11.906 { 00:16:11.906 "name": "BaseBdev2", 00:16:11.906 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:11.906 "is_configured": true, 00:16:11.906 "data_offset": 0, 00:16:11.906 "data_size": 65536 00:16:11.906 } 00:16:11.906 ] 00:16:11.906 }' 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.906 [2024-12-05 19:36:05.253444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.906 [2024-12-05 19:36:05.299596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:11.906 [2024-12-05 19:36:05.299761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.906 [2024-12-05 19:36:05.299791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.906 [2024-12-05 19:36:05.299812] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.906 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.163 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.163 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.163 "name": "raid_bdev1", 00:16:12.163 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:12.163 "strip_size_kb": 0, 00:16:12.163 "state": "online", 00:16:12.163 "raid_level": "raid1", 00:16:12.163 "superblock": false, 00:16:12.163 "num_base_bdevs": 2, 00:16:12.163 "num_base_bdevs_discovered": 1, 00:16:12.163 "num_base_bdevs_operational": 1, 00:16:12.163 "base_bdevs_list": [ 00:16:12.163 { 00:16:12.163 "name": null, 00:16:12.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.163 
"is_configured": false, 00:16:12.163 "data_offset": 0, 00:16:12.163 "data_size": 65536 00:16:12.163 }, 00:16:12.163 { 00:16:12.163 "name": "BaseBdev2", 00:16:12.163 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:12.163 "is_configured": true, 00:16:12.163 "data_offset": 0, 00:16:12.163 "data_size": 65536 00:16:12.163 } 00:16:12.163 ] 00:16:12.163 }' 00:16:12.163 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.163 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.421 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.422 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.422 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.681 "name": "raid_bdev1", 00:16:12.681 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:12.681 "strip_size_kb": 0, 00:16:12.681 "state": "online", 00:16:12.681 "raid_level": "raid1", 00:16:12.681 "superblock": false, 00:16:12.681 "num_base_bdevs": 2, 00:16:12.681 
"num_base_bdevs_discovered": 1, 00:16:12.681 "num_base_bdevs_operational": 1, 00:16:12.681 "base_bdevs_list": [ 00:16:12.681 { 00:16:12.681 "name": null, 00:16:12.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.681 "is_configured": false, 00:16:12.681 "data_offset": 0, 00:16:12.681 "data_size": 65536 00:16:12.681 }, 00:16:12.681 { 00:16:12.681 "name": "BaseBdev2", 00:16:12.681 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:12.681 "is_configured": true, 00:16:12.681 "data_offset": 0, 00:16:12.681 "data_size": 65536 00:16:12.681 } 00:16:12.681 ] 00:16:12.681 }' 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.681 [2024-12-05 19:36:05.982652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.681 [2024-12-05 19:36:05.997772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.681 19:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.681 [2024-12-05 19:36:06.000226] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.617 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.617 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.617 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.618 "name": "raid_bdev1", 00:16:13.618 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:13.618 "strip_size_kb": 0, 00:16:13.618 "state": "online", 00:16:13.618 "raid_level": "raid1", 00:16:13.618 "superblock": false, 00:16:13.618 "num_base_bdevs": 2, 00:16:13.618 "num_base_bdevs_discovered": 2, 00:16:13.618 "num_base_bdevs_operational": 2, 00:16:13.618 "process": { 00:16:13.618 "type": "rebuild", 00:16:13.618 "target": "spare", 00:16:13.618 "progress": { 00:16:13.618 "blocks": 18432, 00:16:13.618 "percent": 28 00:16:13.618 } 00:16:13.618 }, 00:16:13.618 "base_bdevs_list": [ 00:16:13.618 { 00:16:13.618 "name": "spare", 00:16:13.618 "uuid": "dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:13.618 "is_configured": true, 00:16:13.618 "data_offset": 0, 00:16:13.618 "data_size": 65536 00:16:13.618 }, 00:16:13.618 { 00:16:13.618 "name": "BaseBdev2", 00:16:13.618 "uuid": 
"e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:13.618 "is_configured": true, 00:16:13.618 "data_offset": 0, 00:16:13.618 "data_size": 65536 00:16:13.618 } 00:16:13.618 ] 00:16:13.618 }' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=401 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.877 "name": "raid_bdev1", 00:16:13.877 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:13.877 "strip_size_kb": 0, 00:16:13.877 "state": "online", 00:16:13.877 "raid_level": "raid1", 00:16:13.877 "superblock": false, 00:16:13.877 "num_base_bdevs": 2, 00:16:13.877 "num_base_bdevs_discovered": 2, 00:16:13.877 "num_base_bdevs_operational": 2, 00:16:13.877 "process": { 00:16:13.877 "type": "rebuild", 00:16:13.877 "target": "spare", 00:16:13.877 "progress": { 00:16:13.877 "blocks": 22528, 00:16:13.877 "percent": 34 00:16:13.877 } 00:16:13.877 }, 00:16:13.877 "base_bdevs_list": [ 00:16:13.877 { 00:16:13.877 "name": "spare", 00:16:13.877 "uuid": "dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:13.877 "is_configured": true, 00:16:13.877 "data_offset": 0, 00:16:13.877 "data_size": 65536 00:16:13.877 }, 00:16:13.877 { 00:16:13.877 "name": "BaseBdev2", 00:16:13.877 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:13.877 "is_configured": true, 00:16:13.877 "data_offset": 0, 00:16:13.877 "data_size": 65536 00:16:13.877 } 00:16:13.877 ] 00:16:13.877 }' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.877 19:36:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.278 "name": "raid_bdev1", 00:16:15.278 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:15.278 "strip_size_kb": 0, 00:16:15.278 "state": "online", 00:16:15.278 "raid_level": "raid1", 00:16:15.278 "superblock": false, 00:16:15.278 "num_base_bdevs": 2, 00:16:15.278 "num_base_bdevs_discovered": 2, 00:16:15.278 "num_base_bdevs_operational": 2, 00:16:15.278 "process": { 00:16:15.278 "type": "rebuild", 00:16:15.278 "target": "spare", 00:16:15.278 "progress": { 00:16:15.278 "blocks": 47104, 00:16:15.278 "percent": 71 00:16:15.278 } 00:16:15.278 }, 00:16:15.278 "base_bdevs_list": [ 00:16:15.278 { 00:16:15.278 "name": "spare", 00:16:15.278 "uuid": 
"dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:15.278 "is_configured": true, 00:16:15.278 "data_offset": 0, 00:16:15.278 "data_size": 65536 00:16:15.278 }, 00:16:15.278 { 00:16:15.278 "name": "BaseBdev2", 00:16:15.278 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:15.278 "is_configured": true, 00:16:15.278 "data_offset": 0, 00:16:15.278 "data_size": 65536 00:16:15.278 } 00:16:15.278 ] 00:16:15.278 }' 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.278 19:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.846 [2024-12-05 19:36:09.232620] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:15.846 [2024-12-05 19:36:09.232777] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:15.846 [2024-12-05 19:36:09.232860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.104 19:36:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.104 "name": "raid_bdev1", 00:16:16.104 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:16.104 "strip_size_kb": 0, 00:16:16.104 "state": "online", 00:16:16.104 "raid_level": "raid1", 00:16:16.104 "superblock": false, 00:16:16.104 "num_base_bdevs": 2, 00:16:16.104 "num_base_bdevs_discovered": 2, 00:16:16.104 "num_base_bdevs_operational": 2, 00:16:16.104 "base_bdevs_list": [ 00:16:16.104 { 00:16:16.104 "name": "spare", 00:16:16.104 "uuid": "dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:16.104 "is_configured": true, 00:16:16.104 "data_offset": 0, 00:16:16.104 "data_size": 65536 00:16:16.104 }, 00:16:16.104 { 00:16:16.104 "name": "BaseBdev2", 00:16:16.104 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:16.104 "is_configured": true, 00:16:16.104 "data_offset": 0, 00:16:16.104 "data_size": 65536 00:16:16.104 } 00:16:16.104 ] 00:16:16.104 }' 00:16:16.104 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.362 "name": "raid_bdev1", 00:16:16.362 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:16.362 "strip_size_kb": 0, 00:16:16.362 "state": "online", 00:16:16.362 "raid_level": "raid1", 00:16:16.362 "superblock": false, 00:16:16.362 "num_base_bdevs": 2, 00:16:16.362 "num_base_bdevs_discovered": 2, 00:16:16.362 "num_base_bdevs_operational": 2, 00:16:16.362 "base_bdevs_list": [ 00:16:16.362 { 00:16:16.362 "name": "spare", 00:16:16.362 "uuid": "dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:16.362 "is_configured": true, 00:16:16.362 "data_offset": 0, 00:16:16.362 "data_size": 65536 00:16:16.362 }, 00:16:16.362 { 00:16:16.362 "name": "BaseBdev2", 00:16:16.362 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:16.362 "is_configured": true, 00:16:16.362 "data_offset": 0, 00:16:16.362 "data_size": 65536 
00:16:16.362 } 00:16:16.362 ] 00:16:16.362 }' 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.362 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.620 
19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.620 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.620 "name": "raid_bdev1", 00:16:16.620 "uuid": "1442df57-fb80-4317-a530-93d87aad4462", 00:16:16.620 "strip_size_kb": 0, 00:16:16.620 "state": "online", 00:16:16.621 "raid_level": "raid1", 00:16:16.621 "superblock": false, 00:16:16.621 "num_base_bdevs": 2, 00:16:16.621 "num_base_bdevs_discovered": 2, 00:16:16.621 "num_base_bdevs_operational": 2, 00:16:16.621 "base_bdevs_list": [ 00:16:16.621 { 00:16:16.621 "name": "spare", 00:16:16.621 "uuid": "dc4764df-0e61-5b92-bd8b-ee8746250eaf", 00:16:16.621 "is_configured": true, 00:16:16.621 "data_offset": 0, 00:16:16.621 "data_size": 65536 00:16:16.621 }, 00:16:16.621 { 00:16:16.621 "name": "BaseBdev2", 00:16:16.621 "uuid": "e9147cf8-41e0-55ac-95e3-f1d9b95e35e1", 00:16:16.621 "is_configured": true, 00:16:16.621 "data_offset": 0, 00:16:16.621 "data_size": 65536 00:16:16.621 } 00:16:16.621 ] 00:16:16.621 }' 00:16:16.621 19:36:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.621 19:36:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.972 [2024-12-05 19:36:10.317623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.972 [2024-12-05 19:36:10.317704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.972 [2024-12-05 19:36:10.317862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.972 [2024-12-05 19:36:10.317976] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.972 [2024-12-05 19:36:10.317997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.972 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:17.538 /dev/nbd0 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.538 1+0 records in 00:16:17.538 1+0 records out 00:16:17.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737077 s, 5.6 MB/s 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.538 19:36:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:17.797 /dev/nbd1 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.797 1+0 records in 00:16:17.797 1+0 records out 00:16:17.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394176 s, 10.4 MB/s 00:16:17.797 19:36:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.797 19:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.055 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.314 
19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.314 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75462 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75462 ']' 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75462 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75462 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.573 killing process with pid 75462 00:16:18.573 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.573 00:16:18.573 Latency(us) 00:16:18.573 [2024-12-05T19:36:12.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.573 [2024-12-05T19:36:12.014Z] =================================================================================================================== 00:16:18.573 [2024-12-05T19:36:12.014Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75462' 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75462 00:16:18.573 19:36:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75462 00:16:18.573 [2024-12-05 19:36:11.851231] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.832 [2024-12-05 19:36:12.120520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.770 19:36:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:19.770 ************************************ 00:16:19.770 END TEST raid_rebuild_test 00:16:19.770 ************************************ 00:16:19.770 00:16:19.770 real 0m19.264s 00:16:19.770 user 0m20.909s 00:16:19.770 sys 0m3.754s 00:16:19.770 19:36:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.770 19:36:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.030 19:36:13 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:16:20.030 19:36:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:20.030 19:36:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.030 19:36:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.030 ************************************ 00:16:20.030 START TEST raid_rebuild_test_sb 00:16:20.030 ************************************ 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:20.030 19:36:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75924 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75924 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75924 ']' 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.030 
19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.030 19:36:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.030 [2024-12-05 19:36:13.361139] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:16:20.030 [2024-12-05 19:36:13.361634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75924 ] 00:16:20.030 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:20.030 Zero copy mechanism will not be used. 
00:16:20.290 [2024-12-05 19:36:13.537019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.290 [2024-12-05 19:36:13.684681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.549 [2024-12-05 19:36:13.908874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.549 [2024-12-05 19:36:13.909183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 BaseBdev1_malloc 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 [2024-12-05 19:36:14.388794] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:21.117 [2024-12-05 19:36:14.388898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.117 [2024-12-05 19:36:14.388936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:21.117 [2024-12-05 
19:36:14.388958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.117 [2024-12-05 19:36:14.391761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.117 [2024-12-05 19:36:14.392101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.117 BaseBdev1 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 BaseBdev2_malloc 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 [2024-12-05 19:36:14.439055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:21.117 [2024-12-05 19:36:14.439199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.117 [2024-12-05 19:36:14.439242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:21.117 [2024-12-05 19:36:14.439276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.117 [2024-12-05 19:36:14.442588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:21.117 [2024-12-05 19:36:14.442643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:21.117 BaseBdev2 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 spare_malloc 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 spare_delay 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 [2024-12-05 19:36:14.514096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.117 [2024-12-05 19:36:14.514241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.117 [2024-12-05 19:36:14.514276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:21.117 [2024-12-05 19:36:14.514298] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.117 [2024-12-05 19:36:14.517095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.117 [2024-12-05 19:36:14.517149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.117 spare 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 [2024-12-05 19:36:14.522189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.117 [2024-12-05 19:36:14.524548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.117 [2024-12-05 19:36:14.524820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:21.117 [2024-12-05 19:36:14.524847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.117 [2024-12-05 19:36:14.525139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:21.117 [2024-12-05 19:36:14.525375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:21.117 [2024-12-05 19:36:14.525393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:21.117 [2024-12-05 19:36:14.525575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.117 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.376 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.376 "name": "raid_bdev1", 00:16:21.376 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:21.376 "strip_size_kb": 0, 00:16:21.376 "state": "online", 00:16:21.376 "raid_level": "raid1", 00:16:21.376 "superblock": true, 00:16:21.376 "num_base_bdevs": 2, 00:16:21.376 
"num_base_bdevs_discovered": 2, 00:16:21.376 "num_base_bdevs_operational": 2, 00:16:21.376 "base_bdevs_list": [ 00:16:21.376 { 00:16:21.376 "name": "BaseBdev1", 00:16:21.376 "uuid": "e544579f-6fec-53b6-9eb0-ae8076fb275c", 00:16:21.376 "is_configured": true, 00:16:21.376 "data_offset": 2048, 00:16:21.376 "data_size": 63488 00:16:21.376 }, 00:16:21.376 { 00:16:21.376 "name": "BaseBdev2", 00:16:21.376 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:21.376 "is_configured": true, 00:16:21.376 "data_offset": 2048, 00:16:21.377 "data_size": 63488 00:16:21.377 } 00:16:21.377 ] 00:16:21.377 }' 00:16:21.377 19:36:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.377 19:36:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.636 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.636 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:21.636 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.636 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.636 [2024-12-05 19:36:15.050841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.636 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.892 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:21.893 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:22.150 [2024-12-05 19:36:15.470713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:22.150 /dev/nbd0 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.150 1+0 records in 00:16:22.150 1+0 records out 00:16:22.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003577 s, 11.5 MB/s 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:22.150 19:36:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:22.150 19:36:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:30.350 63488+0 records in 00:16:30.350 63488+0 records out 00:16:30.350 32505856 bytes (33 MB, 31 MiB) copied, 7.35509 s, 4.4 MB/s 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.350 19:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:30.350 [2024-12-05 19:36:23.159910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.350 [2024-12-05 19:36:23.191979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.350 19:36:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.350 "name": "raid_bdev1", 00:16:30.350 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:30.350 "strip_size_kb": 0, 00:16:30.350 "state": "online", 00:16:30.350 "raid_level": "raid1", 00:16:30.350 "superblock": true, 00:16:30.350 "num_base_bdevs": 2, 00:16:30.350 "num_base_bdevs_discovered": 1, 00:16:30.350 "num_base_bdevs_operational": 1, 00:16:30.350 "base_bdevs_list": [ 00:16:30.350 { 00:16:30.350 "name": null, 00:16:30.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.350 "is_configured": false, 00:16:30.350 "data_offset": 0, 00:16:30.350 "data_size": 63488 00:16:30.350 }, 00:16:30.350 { 00:16:30.350 "name": "BaseBdev2", 00:16:30.350 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:30.350 "is_configured": true, 00:16:30.350 "data_offset": 2048, 00:16:30.350 "data_size": 63488 00:16:30.350 } 00:16:30.350 ] 00:16:30.350 }' 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.350 [2024-12-05 19:36:23.720265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:16:30.350 [2024-12-05 19:36:23.737167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.350 19:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:30.350 [2024-12-05 19:36:23.739817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.725 "name": "raid_bdev1", 00:16:31.725 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:31.725 "strip_size_kb": 0, 00:16:31.725 "state": "online", 00:16:31.725 "raid_level": "raid1", 00:16:31.725 "superblock": true, 00:16:31.725 "num_base_bdevs": 2, 00:16:31.725 
"num_base_bdevs_discovered": 2, 00:16:31.725 "num_base_bdevs_operational": 2, 00:16:31.725 "process": { 00:16:31.725 "type": "rebuild", 00:16:31.725 "target": "spare", 00:16:31.725 "progress": { 00:16:31.725 "blocks": 20480, 00:16:31.725 "percent": 32 00:16:31.725 } 00:16:31.725 }, 00:16:31.725 "base_bdevs_list": [ 00:16:31.725 { 00:16:31.725 "name": "spare", 00:16:31.725 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:31.725 "is_configured": true, 00:16:31.725 "data_offset": 2048, 00:16:31.725 "data_size": 63488 00:16:31.725 }, 00:16:31.725 { 00:16:31.725 "name": "BaseBdev2", 00:16:31.725 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:31.725 "is_configured": true, 00:16:31.725 "data_offset": 2048, 00:16:31.725 "data_size": 63488 00:16:31.725 } 00:16:31.725 ] 00:16:31.725 }' 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.725 [2024-12-05 19:36:24.902049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.725 [2024-12-05 19:36:24.952168] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.725 [2024-12-05 19:36:24.952288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.725 [2024-12-05 19:36:24.952317] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.725 [2024-12-05 19:36:24.952343] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.725 19:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.726 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.726 19:36:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.726 "name": "raid_bdev1", 00:16:31.726 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:31.726 "strip_size_kb": 0, 00:16:31.726 "state": "online", 00:16:31.726 "raid_level": "raid1", 00:16:31.726 "superblock": true, 00:16:31.726 "num_base_bdevs": 2, 00:16:31.726 "num_base_bdevs_discovered": 1, 00:16:31.726 "num_base_bdevs_operational": 1, 00:16:31.726 "base_bdevs_list": [ 00:16:31.726 { 00:16:31.726 "name": null, 00:16:31.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.726 "is_configured": false, 00:16:31.726 "data_offset": 0, 00:16:31.726 "data_size": 63488 00:16:31.726 }, 00:16:31.726 { 00:16:31.726 "name": "BaseBdev2", 00:16:31.726 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:31.726 "is_configured": true, 00:16:31.726 "data_offset": 2048, 00:16:31.726 "data_size": 63488 00:16:31.726 } 00:16:31.726 ] 00:16:31.726 }' 00:16:31.726 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.726 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.293 "name": "raid_bdev1", 00:16:32.293 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:32.293 "strip_size_kb": 0, 00:16:32.293 "state": "online", 00:16:32.293 "raid_level": "raid1", 00:16:32.293 "superblock": true, 00:16:32.293 "num_base_bdevs": 2, 00:16:32.293 "num_base_bdevs_discovered": 1, 00:16:32.293 "num_base_bdevs_operational": 1, 00:16:32.293 "base_bdevs_list": [ 00:16:32.293 { 00:16:32.293 "name": null, 00:16:32.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.293 "is_configured": false, 00:16:32.293 "data_offset": 0, 00:16:32.293 "data_size": 63488 00:16:32.293 }, 00:16:32.293 { 00:16:32.293 "name": "BaseBdev2", 00:16:32.293 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:32.293 "is_configured": true, 00:16:32.293 "data_offset": 2048, 00:16:32.293 "data_size": 63488 00:16:32.293 } 00:16:32.293 ] 00:16:32.293 }' 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.293 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.294 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.294 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.294 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.294 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:32.294 [2024-12-05 19:36:25.682414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.294 [2024-12-05 19:36:25.700994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:16:32.294 19:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.294 19:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:32.294 [2024-12-05 19:36:25.704192] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.271 19:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.529 "name": "raid_bdev1", 00:16:33.529 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:33.529 "strip_size_kb": 0, 00:16:33.529 "state": "online", 00:16:33.529 "raid_level": "raid1", 
00:16:33.529 "superblock": true, 00:16:33.529 "num_base_bdevs": 2, 00:16:33.529 "num_base_bdevs_discovered": 2, 00:16:33.529 "num_base_bdevs_operational": 2, 00:16:33.529 "process": { 00:16:33.529 "type": "rebuild", 00:16:33.529 "target": "spare", 00:16:33.529 "progress": { 00:16:33.529 "blocks": 20480, 00:16:33.529 "percent": 32 00:16:33.529 } 00:16:33.529 }, 00:16:33.529 "base_bdevs_list": [ 00:16:33.529 { 00:16:33.529 "name": "spare", 00:16:33.529 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:33.529 "is_configured": true, 00:16:33.529 "data_offset": 2048, 00:16:33.529 "data_size": 63488 00:16:33.529 }, 00:16:33.529 { 00:16:33.529 "name": "BaseBdev2", 00:16:33.529 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:33.529 "is_configured": true, 00:16:33.529 "data_offset": 2048, 00:16:33.529 "data_size": 63488 00:16:33.529 } 00:16:33.529 ] 00:16:33.529 }' 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:33.529 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:33.529 19:36:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=420 00:16:33.529 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.530 "name": "raid_bdev1", 00:16:33.530 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:33.530 "strip_size_kb": 0, 00:16:33.530 "state": "online", 00:16:33.530 "raid_level": "raid1", 00:16:33.530 "superblock": true, 00:16:33.530 "num_base_bdevs": 2, 00:16:33.530 "num_base_bdevs_discovered": 2, 00:16:33.530 "num_base_bdevs_operational": 2, 00:16:33.530 "process": { 00:16:33.530 "type": "rebuild", 00:16:33.530 "target": "spare", 00:16:33.530 "progress": { 00:16:33.530 "blocks": 22528, 00:16:33.530 "percent": 35 00:16:33.530 } 00:16:33.530 }, 00:16:33.530 "base_bdevs_list": [ 
00:16:33.530 { 00:16:33.530 "name": "spare", 00:16:33.530 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:33.530 "is_configured": true, 00:16:33.530 "data_offset": 2048, 00:16:33.530 "data_size": 63488 00:16:33.530 }, 00:16:33.530 { 00:16:33.530 "name": "BaseBdev2", 00:16:33.530 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:33.530 "is_configured": true, 00:16:33.530 "data_offset": 2048, 00:16:33.530 "data_size": 63488 00:16:33.530 } 00:16:33.530 ] 00:16:33.530 }' 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.530 19:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.788 19:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.788 19:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.724 "name": "raid_bdev1", 00:16:34.724 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:34.724 "strip_size_kb": 0, 00:16:34.724 "state": "online", 00:16:34.724 "raid_level": "raid1", 00:16:34.724 "superblock": true, 00:16:34.724 "num_base_bdevs": 2, 00:16:34.724 "num_base_bdevs_discovered": 2, 00:16:34.724 "num_base_bdevs_operational": 2, 00:16:34.724 "process": { 00:16:34.724 "type": "rebuild", 00:16:34.724 "target": "spare", 00:16:34.724 "progress": { 00:16:34.724 "blocks": 47104, 00:16:34.724 "percent": 74 00:16:34.724 } 00:16:34.724 }, 00:16:34.724 "base_bdevs_list": [ 00:16:34.724 { 00:16:34.724 "name": "spare", 00:16:34.724 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:34.724 "is_configured": true, 00:16:34.724 "data_offset": 2048, 00:16:34.724 "data_size": 63488 00:16:34.724 }, 00:16:34.724 { 00:16:34.724 "name": "BaseBdev2", 00:16:34.724 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:34.724 "is_configured": true, 00:16:34.724 "data_offset": 2048, 00:16:34.724 "data_size": 63488 00:16:34.724 } 00:16:34.724 ] 00:16:34.724 }' 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.724 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.982 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.982 19:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:35.548 [2024-12-05 19:36:28.828058] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:35.548 [2024-12-05 19:36:28.828212] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:35.548 [2024-12-05 19:36:28.828342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.808 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.808 "name": "raid_bdev1", 00:16:35.808 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:35.808 "strip_size_kb": 0, 00:16:35.808 "state": "online", 00:16:35.808 "raid_level": "raid1", 00:16:35.808 "superblock": true, 00:16:35.808 "num_base_bdevs": 2, 00:16:35.808 
"num_base_bdevs_discovered": 2, 00:16:35.808 "num_base_bdevs_operational": 2, 00:16:35.808 "base_bdevs_list": [ 00:16:35.808 { 00:16:35.808 "name": "spare", 00:16:35.808 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:35.808 "is_configured": true, 00:16:35.808 "data_offset": 2048, 00:16:35.808 "data_size": 63488 00:16:35.808 }, 00:16:35.808 { 00:16:35.808 "name": "BaseBdev2", 00:16:35.808 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:35.808 "is_configured": true, 00:16:35.808 "data_offset": 2048, 00:16:35.808 "data_size": 63488 00:16:35.808 } 00:16:35.808 ] 00:16:35.808 }' 00:16:35.808 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.067 "name": "raid_bdev1", 00:16:36.067 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:36.067 "strip_size_kb": 0, 00:16:36.067 "state": "online", 00:16:36.067 "raid_level": "raid1", 00:16:36.067 "superblock": true, 00:16:36.067 "num_base_bdevs": 2, 00:16:36.067 "num_base_bdevs_discovered": 2, 00:16:36.067 "num_base_bdevs_operational": 2, 00:16:36.067 "base_bdevs_list": [ 00:16:36.067 { 00:16:36.067 "name": "spare", 00:16:36.067 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:36.067 "is_configured": true, 00:16:36.067 "data_offset": 2048, 00:16:36.067 "data_size": 63488 00:16:36.067 }, 00:16:36.067 { 00:16:36.067 "name": "BaseBdev2", 00:16:36.067 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:36.067 "is_configured": true, 00:16:36.067 "data_offset": 2048, 00:16:36.067 "data_size": 63488 00:16:36.067 } 00:16:36.067 ] 00:16:36.067 }' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.067 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.326 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.326 "name": "raid_bdev1", 00:16:36.326 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:36.326 "strip_size_kb": 0, 00:16:36.326 "state": "online", 00:16:36.326 "raid_level": "raid1", 00:16:36.326 "superblock": true, 00:16:36.326 "num_base_bdevs": 2, 00:16:36.326 "num_base_bdevs_discovered": 2, 00:16:36.326 "num_base_bdevs_operational": 2, 00:16:36.326 "base_bdevs_list": [ 00:16:36.326 { 00:16:36.326 "name": "spare", 00:16:36.326 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:36.326 "is_configured": true, 00:16:36.326 "data_offset": 2048, 00:16:36.326 
"data_size": 63488 00:16:36.326 }, 00:16:36.326 { 00:16:36.326 "name": "BaseBdev2", 00:16:36.326 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:36.326 "is_configured": true, 00:16:36.326 "data_offset": 2048, 00:16:36.326 "data_size": 63488 00:16:36.326 } 00:16:36.326 ] 00:16:36.326 }' 00:16:36.326 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.326 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 [2024-12-05 19:36:29.984502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.585 [2024-12-05 19:36:29.984565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.585 [2024-12-05 19:36:29.984732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.585 [2024-12-05 19:36:29.984871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.585 [2024-12-05 19:36:29.984897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 19:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 
00:16:36.585 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:36.844 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:37.102 /dev/nbd0 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.102 1+0 records in 00:16:37.102 1+0 records out 00:16:37.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264728 s, 15.5 MB/s 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:37.102 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:37.362 /dev/nbd1 00:16:37.362 19:36:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.362 1+0 records in 00:16:37.362 1+0 records out 00:16:37.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369483 s, 11.1 MB/s 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:37.362 19:36:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:37.362 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.625 19:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.890 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.150 [2024-12-05 19:36:31.502625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:16:38.150 [2024-12-05 19:36:31.502693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.150 [2024-12-05 19:36:31.502755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:38.150 [2024-12-05 19:36:31.502773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.150 [2024-12-05 19:36:31.505619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.150 [2024-12-05 19:36:31.505672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.150 [2024-12-05 19:36:31.505822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.150 [2024-12-05 19:36:31.505885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.150 [2024-12-05 19:36:31.506125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.150 spare 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.150 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.410 [2024-12-05 19:36:31.606273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:38.410 [2024-12-05 19:36:31.606323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.410 [2024-12-05 19:36:31.606684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:16:38.410 [2024-12-05 19:36:31.606913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:38.410 [2024-12-05 19:36:31.606944] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:38.410 [2024-12-05 19:36:31.607140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.410 
19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.410 "name": "raid_bdev1", 00:16:38.410 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:38.410 "strip_size_kb": 0, 00:16:38.410 "state": "online", 00:16:38.410 "raid_level": "raid1", 00:16:38.410 "superblock": true, 00:16:38.410 "num_base_bdevs": 2, 00:16:38.410 "num_base_bdevs_discovered": 2, 00:16:38.410 "num_base_bdevs_operational": 2, 00:16:38.410 "base_bdevs_list": [ 00:16:38.410 { 00:16:38.410 "name": "spare", 00:16:38.410 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:38.410 "is_configured": true, 00:16:38.410 "data_offset": 2048, 00:16:38.410 "data_size": 63488 00:16:38.410 }, 00:16:38.410 { 00:16:38.410 "name": "BaseBdev2", 00:16:38.410 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:38.410 "is_configured": true, 00:16:38.410 "data_offset": 2048, 00:16:38.410 "data_size": 63488 00:16:38.410 } 00:16:38.410 ] 00:16:38.410 }' 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.410 19:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.978 "name": "raid_bdev1", 00:16:38.978 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:38.978 "strip_size_kb": 0, 00:16:38.978 "state": "online", 00:16:38.978 "raid_level": "raid1", 00:16:38.978 "superblock": true, 00:16:38.978 "num_base_bdevs": 2, 00:16:38.978 "num_base_bdevs_discovered": 2, 00:16:38.978 "num_base_bdevs_operational": 2, 00:16:38.978 "base_bdevs_list": [ 00:16:38.978 { 00:16:38.978 "name": "spare", 00:16:38.978 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:38.978 "is_configured": true, 00:16:38.978 "data_offset": 2048, 00:16:38.978 "data_size": 63488 00:16:38.978 }, 00:16:38.978 { 00:16:38.978 "name": "BaseBdev2", 00:16:38.978 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:38.978 "is_configured": true, 00:16:38.978 "data_offset": 2048, 00:16:38.978 "data_size": 63488 00:16:38.978 } 00:16:38.978 ] 00:16:38.978 }' 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.978 [2024-12-05 19:36:32.347457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.978 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.979 "name": "raid_bdev1", 00:16:38.979 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:38.979 "strip_size_kb": 0, 00:16:38.979 "state": "online", 00:16:38.979 "raid_level": "raid1", 00:16:38.979 "superblock": true, 00:16:38.979 "num_base_bdevs": 2, 00:16:38.979 "num_base_bdevs_discovered": 1, 00:16:38.979 "num_base_bdevs_operational": 1, 00:16:38.979 "base_bdevs_list": [ 00:16:38.979 { 00:16:38.979 "name": null, 00:16:38.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.979 "is_configured": false, 00:16:38.979 "data_offset": 0, 00:16:38.979 "data_size": 63488 00:16:38.979 }, 00:16:38.979 { 00:16:38.979 "name": "BaseBdev2", 00:16:38.979 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:38.979 "is_configured": true, 00:16:38.979 "data_offset": 2048, 00:16:38.979 "data_size": 63488 00:16:38.979 } 00:16:38.979 ] 00:16:38.979 }' 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.979 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.546 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.546 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.546 19:36:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.546 [2024-12-05 19:36:32.867640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.546 [2024-12-05 19:36:32.867939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.546 [2024-12-05 19:36:32.867977] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:39.546 [2024-12-05 19:36:32.868035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.546 [2024-12-05 19:36:32.883660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:39.546 19:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.546 19:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:39.546 [2024-12-05 19:36:32.886352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.482 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.482 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.482 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.483 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.483 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.483 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.483 19:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.483 19:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.483 19:36:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.483 19:36:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.742 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.742 "name": "raid_bdev1", 00:16:40.742 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:40.742 "strip_size_kb": 0, 00:16:40.742 "state": "online", 00:16:40.742 "raid_level": "raid1", 00:16:40.742 "superblock": true, 00:16:40.742 "num_base_bdevs": 2, 00:16:40.742 "num_base_bdevs_discovered": 2, 00:16:40.742 "num_base_bdevs_operational": 2, 00:16:40.742 "process": { 00:16:40.742 "type": "rebuild", 00:16:40.742 "target": "spare", 00:16:40.742 "progress": { 00:16:40.742 "blocks": 20480, 00:16:40.742 "percent": 32 00:16:40.742 } 00:16:40.742 }, 00:16:40.742 "base_bdevs_list": [ 00:16:40.742 { 00:16:40.742 "name": "spare", 00:16:40.742 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:40.742 "is_configured": true, 00:16:40.742 "data_offset": 2048, 00:16:40.742 "data_size": 63488 00:16:40.742 }, 00:16:40.742 { 00:16:40.742 "name": "BaseBdev2", 00:16:40.742 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:40.742 "is_configured": true, 00:16:40.742 "data_offset": 2048, 00:16:40.742 "data_size": 63488 00:16:40.742 } 00:16:40.742 ] 00:16:40.742 }' 00:16:40.742 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.742 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.742 19:36:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.742 19:36:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.742 [2024-12-05 19:36:34.044174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.742 [2024-12-05 19:36:34.095320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.742 [2024-12-05 19:36:34.095410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.742 [2024-12-05 19:36:34.095433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.742 [2024-12-05 19:36:34.095449] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.742 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.001 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.001 "name": "raid_bdev1", 00:16:41.001 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:41.001 "strip_size_kb": 0, 00:16:41.001 "state": "online", 00:16:41.001 "raid_level": "raid1", 00:16:41.001 "superblock": true, 00:16:41.001 "num_base_bdevs": 2, 00:16:41.001 "num_base_bdevs_discovered": 1, 00:16:41.001 "num_base_bdevs_operational": 1, 00:16:41.001 "base_bdevs_list": [ 00:16:41.001 { 00:16:41.001 "name": null, 00:16:41.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.001 "is_configured": false, 00:16:41.001 "data_offset": 0, 00:16:41.001 "data_size": 63488 00:16:41.001 }, 00:16:41.001 { 00:16:41.001 "name": "BaseBdev2", 00:16:41.001 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:41.001 "is_configured": true, 00:16:41.001 "data_offset": 2048, 00:16:41.001 "data_size": 63488 00:16:41.001 } 00:16:41.001 ] 00:16:41.001 }' 00:16:41.001 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.001 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.261 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.261 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:41.261 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.261 [2024-12-05 19:36:34.654986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.261 [2024-12-05 19:36:34.655111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.261 [2024-12-05 19:36:34.655143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:41.261 [2024-12-05 19:36:34.655176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.261 [2024-12-05 19:36:34.655829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.261 [2024-12-05 19:36:34.655868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.261 [2024-12-05 19:36:34.655989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.261 [2024-12-05 19:36:34.656024] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.261 [2024-12-05 19:36:34.656043] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:41.261 [2024-12-05 19:36:34.656081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.261 [2024-12-05 19:36:34.671556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:41.261 spare 00:16:41.261 19:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.261 19:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:41.261 [2024-12-05 19:36:34.674104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.662 "name": "raid_bdev1", 00:16:42.662 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:42.662 "strip_size_kb": 0, 00:16:42.662 "state": "online", 00:16:42.662 
"raid_level": "raid1", 00:16:42.662 "superblock": true, 00:16:42.662 "num_base_bdevs": 2, 00:16:42.662 "num_base_bdevs_discovered": 2, 00:16:42.662 "num_base_bdevs_operational": 2, 00:16:42.662 "process": { 00:16:42.662 "type": "rebuild", 00:16:42.662 "target": "spare", 00:16:42.662 "progress": { 00:16:42.662 "blocks": 20480, 00:16:42.662 "percent": 32 00:16:42.662 } 00:16:42.662 }, 00:16:42.662 "base_bdevs_list": [ 00:16:42.662 { 00:16:42.662 "name": "spare", 00:16:42.662 "uuid": "fcec3d4e-06c0-5da6-863c-7283c32b38c3", 00:16:42.662 "is_configured": true, 00:16:42.662 "data_offset": 2048, 00:16:42.662 "data_size": 63488 00:16:42.662 }, 00:16:42.662 { 00:16:42.662 "name": "BaseBdev2", 00:16:42.662 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:42.662 "is_configured": true, 00:16:42.662 "data_offset": 2048, 00:16:42.662 "data_size": 63488 00:16:42.662 } 00:16:42.662 ] 00:16:42.662 }' 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.662 [2024-12-05 19:36:35.843655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.662 [2024-12-05 19:36:35.882786] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.662 [2024-12-05 19:36:35.882871] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.662 [2024-12-05 19:36:35.882898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.662 [2024-12-05 19:36:35.882910] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.662 19:36:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.662 "name": "raid_bdev1", 00:16:42.662 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:42.662 "strip_size_kb": 0, 00:16:42.662 "state": "online", 00:16:42.662 "raid_level": "raid1", 00:16:42.662 "superblock": true, 00:16:42.662 "num_base_bdevs": 2, 00:16:42.662 "num_base_bdevs_discovered": 1, 00:16:42.662 "num_base_bdevs_operational": 1, 00:16:42.662 "base_bdevs_list": [ 00:16:42.662 { 00:16:42.662 "name": null, 00:16:42.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.662 "is_configured": false, 00:16:42.662 "data_offset": 0, 00:16:42.662 "data_size": 63488 00:16:42.662 }, 00:16:42.662 { 00:16:42.662 "name": "BaseBdev2", 00:16:42.662 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:42.662 "is_configured": true, 00:16:42.662 "data_offset": 2048, 00:16:42.662 "data_size": 63488 00:16:42.662 } 00:16:42.662 ] 00:16:42.662 }' 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.662 19:36:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.231 "name": "raid_bdev1", 00:16:43.231 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:43.231 "strip_size_kb": 0, 00:16:43.231 "state": "online", 00:16:43.231 "raid_level": "raid1", 00:16:43.231 "superblock": true, 00:16:43.231 "num_base_bdevs": 2, 00:16:43.231 "num_base_bdevs_discovered": 1, 00:16:43.231 "num_base_bdevs_operational": 1, 00:16:43.231 "base_bdevs_list": [ 00:16:43.231 { 00:16:43.231 "name": null, 00:16:43.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.231 "is_configured": false, 00:16:43.231 "data_offset": 0, 00:16:43.231 "data_size": 63488 00:16:43.231 }, 00:16:43.231 { 00:16:43.231 "name": "BaseBdev2", 00:16:43.231 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:43.231 "is_configured": true, 00:16:43.231 "data_offset": 2048, 00:16:43.231 "data_size": 63488 00:16:43.231 } 00:16:43.231 ] 00:16:43.231 }' 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.231 [2024-12-05 19:36:36.618059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.231 [2024-12-05 19:36:36.618166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.231 [2024-12-05 19:36:36.618205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:43.231 [2024-12-05 19:36:36.618232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.231 [2024-12-05 19:36:36.618912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.231 [2024-12-05 19:36:36.618950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.231 [2024-12-05 19:36:36.619078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:43.231 [2024-12-05 19:36:36.619099] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.231 [2024-12-05 19:36:36.619121] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.231 [2024-12-05 19:36:36.619138] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:43.231 BaseBdev1 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:43.231 19:36:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.612 "name": "raid_bdev1", 00:16:44.612 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:44.612 "strip_size_kb": 0, 
00:16:44.612 "state": "online", 00:16:44.612 "raid_level": "raid1", 00:16:44.612 "superblock": true, 00:16:44.612 "num_base_bdevs": 2, 00:16:44.612 "num_base_bdevs_discovered": 1, 00:16:44.612 "num_base_bdevs_operational": 1, 00:16:44.612 "base_bdevs_list": [ 00:16:44.612 { 00:16:44.612 "name": null, 00:16:44.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.612 "is_configured": false, 00:16:44.612 "data_offset": 0, 00:16:44.612 "data_size": 63488 00:16:44.612 }, 00:16:44.612 { 00:16:44.612 "name": "BaseBdev2", 00:16:44.612 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:44.612 "is_configured": true, 00:16:44.612 "data_offset": 2048, 00:16:44.612 "data_size": 63488 00:16:44.612 } 00:16:44.612 ] 00:16:44.612 }' 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.612 19:36:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.871 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.872 "name": "raid_bdev1", 00:16:44.872 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:44.872 "strip_size_kb": 0, 00:16:44.872 "state": "online", 00:16:44.872 "raid_level": "raid1", 00:16:44.872 "superblock": true, 00:16:44.872 "num_base_bdevs": 2, 00:16:44.872 "num_base_bdevs_discovered": 1, 00:16:44.872 "num_base_bdevs_operational": 1, 00:16:44.872 "base_bdevs_list": [ 00:16:44.872 { 00:16:44.872 "name": null, 00:16:44.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.872 "is_configured": false, 00:16:44.872 "data_offset": 0, 00:16:44.872 "data_size": 63488 00:16:44.872 }, 00:16:44.872 { 00:16:44.872 "name": "BaseBdev2", 00:16:44.872 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:44.872 "is_configured": true, 00:16:44.872 "data_offset": 2048, 00:16:44.872 "data_size": 63488 00:16:44.872 } 00:16:44.872 ] 00:16:44.872 }' 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.872 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:45.131 19:36:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.131 [2024-12-05 19:36:38.330825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.131 [2024-12-05 19:36:38.331071] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.131 [2024-12-05 19:36:38.331146] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:45.131 request: 00:16:45.131 { 00:16:45.131 "base_bdev": "BaseBdev1", 00:16:45.131 "raid_bdev": "raid_bdev1", 00:16:45.131 "method": "bdev_raid_add_base_bdev", 00:16:45.131 "req_id": 1 00:16:45.131 } 00:16:45.131 Got JSON-RPC error response 00:16:45.131 response: 00:16:45.131 { 00:16:45.131 "code": -22, 00:16:45.131 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:45.131 } 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:45.131 19:36:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:46.068 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.068 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.069 "name": "raid_bdev1", 00:16:46.069 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 
00:16:46.069 "strip_size_kb": 0, 00:16:46.069 "state": "online", 00:16:46.069 "raid_level": "raid1", 00:16:46.069 "superblock": true, 00:16:46.069 "num_base_bdevs": 2, 00:16:46.069 "num_base_bdevs_discovered": 1, 00:16:46.069 "num_base_bdevs_operational": 1, 00:16:46.069 "base_bdevs_list": [ 00:16:46.069 { 00:16:46.069 "name": null, 00:16:46.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.069 "is_configured": false, 00:16:46.069 "data_offset": 0, 00:16:46.069 "data_size": 63488 00:16:46.069 }, 00:16:46.069 { 00:16:46.069 "name": "BaseBdev2", 00:16:46.069 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:46.069 "is_configured": true, 00:16:46.069 "data_offset": 2048, 00:16:46.069 "data_size": 63488 00:16:46.069 } 00:16:46.069 ] 00:16:46.069 }' 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.069 19:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.637 19:36:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.637 "name": "raid_bdev1", 00:16:46.637 "uuid": "eaf801a1-75f9-48b6-ad82-4e1f7bda7c16", 00:16:46.637 "strip_size_kb": 0, 00:16:46.637 "state": "online", 00:16:46.637 "raid_level": "raid1", 00:16:46.637 "superblock": true, 00:16:46.637 "num_base_bdevs": 2, 00:16:46.637 "num_base_bdevs_discovered": 1, 00:16:46.637 "num_base_bdevs_operational": 1, 00:16:46.637 "base_bdevs_list": [ 00:16:46.637 { 00:16:46.637 "name": null, 00:16:46.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.637 "is_configured": false, 00:16:46.637 "data_offset": 0, 00:16:46.637 "data_size": 63488 00:16:46.637 }, 00:16:46.637 { 00:16:46.637 "name": "BaseBdev2", 00:16:46.637 "uuid": "b7fa7565-f93a-5c7a-8a68-d923003575b5", 00:16:46.637 "is_configured": true, 00:16:46.637 "data_offset": 2048, 00:16:46.637 "data_size": 63488 00:16:46.637 } 00:16:46.637 ] 00:16:46.637 }' 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.637 19:36:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75924 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75924 ']' 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75924 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75924 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.637 killing process with pid 75924 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75924' 00:16:46.637 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75924 00:16:46.637 Received shutdown signal, test time was about 60.000000 seconds 00:16:46.637 00:16:46.637 Latency(us) 00:16:46.637 [2024-12-05T19:36:40.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.637 [2024-12-05T19:36:40.078Z] =================================================================================================================== 00:16:46.637 [2024-12-05T19:36:40.079Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.638 [2024-12-05 19:36:40.065332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.638 19:36:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75924 00:16:46.638 [2024-12-05 19:36:40.065496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.638 [2024-12-05 19:36:40.065566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.638 [2024-12-05 19:36:40.065586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:46.897 [2024-12-05 19:36:40.330251] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:48.274 00:16:48.274 real 0m28.081s 
00:16:48.274 user 0m33.413s 00:16:48.274 sys 0m4.465s 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.274 ************************************ 00:16:48.274 END TEST raid_rebuild_test_sb 00:16:48.274 ************************************ 00:16:48.274 19:36:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:48.274 19:36:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:48.274 19:36:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.274 19:36:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.274 ************************************ 00:16:48.274 START TEST raid_rebuild_test_io 00:16:48.274 ************************************ 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.274 
19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76697 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76697 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76697 ']' 00:16:48.274 19:36:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.274 19:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.274 [2024-12-05 19:36:41.487282] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:16:48.274 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:48.274 Zero copy mechanism will not be used. 00:16:48.274 [2024-12-05 19:36:41.487449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76697 ] 00:16:48.274 [2024-12-05 19:36:41.661346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.533 [2024-12-05 19:36:41.790996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.791 [2024-12-05 19:36:41.992084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.791 [2024-12-05 19:36:41.992354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.051 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.051 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:49.051 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:16:49.051 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:49.051 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.051 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.313 BaseBdev1_malloc 00:16:49.313 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.313 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.313 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.313 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.313 [2024-12-05 19:36:42.535880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.313 [2024-12-05 19:36:42.536107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.313 [2024-12-05 19:36:42.536151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.313 [2024-12-05 19:36:42.536172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.314 [2024-12-05 19:36:42.538988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.314 [2024-12-05 19:36:42.539040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.314 BaseBdev1 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.314 19:36:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 BaseBdev2_malloc 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 [2024-12-05 19:36:42.589255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:49.314 [2024-12-05 19:36:42.589365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.314 [2024-12-05 19:36:42.589398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.314 [2024-12-05 19:36:42.589416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.314 [2024-12-05 19:36:42.592296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.314 [2024-12-05 19:36:42.592539] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.314 BaseBdev2 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 spare_malloc 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 spare_delay 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 [2024-12-05 19:36:42.661943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:49.314 [2024-12-05 19:36:42.662018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.314 [2024-12-05 19:36:42.662048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:49.314 [2024-12-05 19:36:42.662066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.314 [2024-12-05 19:36:42.664927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.314 [2024-12-05 19:36:42.664980] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:49.314 spare 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 [2024-12-05 19:36:42.670010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.314 [2024-12-05 19:36:42.672512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.314 [2024-12-05 19:36:42.672838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:49.314 [2024-12-05 19:36:42.672870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:49.314 [2024-12-05 19:36:42.673216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:49.314 [2024-12-05 19:36:42.673422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:49.314 [2024-12-05 19:36:42.673440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:49.314 [2024-12-05 19:36:42.673607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.314 "name": "raid_bdev1", 00:16:49.314 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:49.314 "strip_size_kb": 0, 00:16:49.314 "state": "online", 00:16:49.314 "raid_level": "raid1", 00:16:49.314 "superblock": false, 00:16:49.314 "num_base_bdevs": 2, 00:16:49.314 "num_base_bdevs_discovered": 2, 00:16:49.314 "num_base_bdevs_operational": 2, 00:16:49.314 "base_bdevs_list": [ 00:16:49.314 { 00:16:49.314 "name": "BaseBdev1", 00:16:49.314 "uuid": "4a49002b-7b9b-5029-837a-45b76571a6bf", 00:16:49.314 "is_configured": true, 00:16:49.314 "data_offset": 0, 00:16:49.314 "data_size": 65536 00:16:49.314 }, 00:16:49.314 { 00:16:49.314 "name": "BaseBdev2", 00:16:49.314 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:49.314 "is_configured": true, 00:16:49.314 "data_offset": 0, 00:16:49.314 "data_size": 65536 00:16:49.314 } 00:16:49.314 ] 00:16:49.314 }' 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.314 19:36:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:49.882 [2024-12-05 19:36:43.162540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.882 [2024-12-05 19:36:43.258208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:49.882 "name": "raid_bdev1", 00:16:49.882 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:49.882 "strip_size_kb": 0, 00:16:49.882 "state": "online", 00:16:49.882 "raid_level": "raid1", 00:16:49.882 "superblock": false, 00:16:49.882 "num_base_bdevs": 2, 00:16:49.882 "num_base_bdevs_discovered": 1, 00:16:49.882 "num_base_bdevs_operational": 1, 00:16:49.882 "base_bdevs_list": [ 00:16:49.882 { 00:16:49.882 "name": null, 00:16:49.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.882 "is_configured": false, 00:16:49.882 "data_offset": 0, 00:16:49.882 "data_size": 65536 00:16:49.882 }, 00:16:49.882 { 00:16:49.882 "name": "BaseBdev2", 00:16:49.882 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:49.882 "is_configured": true, 00:16:49.882 "data_offset": 0, 00:16:49.882 "data_size": 65536 00:16:49.882 } 00:16:49.882 ] 00:16:49.882 }' 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.882 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.142 [2024-12-05 19:36:43.386617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:50.142 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:50.142 Zero copy mechanism will not be used. 00:16:50.142 Running I/O for 60 seconds... 
00:16:50.401 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.401 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.401 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.401 [2024-12-05 19:36:43.778977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.401 19:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.401 19:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:50.661 [2024-12-05 19:36:43.846654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:50.661 [2024-12-05 19:36:43.849390] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.661 [2024-12-05 19:36:43.966820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:50.661 [2024-12-05 19:36:43.967477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:50.919 [2024-12-05 19:36:44.203244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:50.919 [2024-12-05 19:36:44.203583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:51.178 153.00 IOPS, 459.00 MiB/s [2024-12-05T19:36:44.619Z] [2024-12-05 19:36:44.551607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:51.178 [2024-12-05 19:36:44.559155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:51.437 [2024-12-05 19:36:44.785781] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:51.437 [2024-12-05 19:36:44.786174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.437 19:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.695 "name": "raid_bdev1", 00:16:51.695 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:51.695 "strip_size_kb": 0, 00:16:51.695 "state": "online", 00:16:51.695 "raid_level": "raid1", 00:16:51.695 "superblock": false, 00:16:51.695 "num_base_bdevs": 2, 00:16:51.695 "num_base_bdevs_discovered": 2, 00:16:51.695 "num_base_bdevs_operational": 2, 00:16:51.695 "process": { 00:16:51.695 "type": "rebuild", 00:16:51.695 "target": "spare", 00:16:51.695 "progress": { 00:16:51.695 "blocks": 10240, 
00:16:51.695 "percent": 15 00:16:51.695 } 00:16:51.695 }, 00:16:51.695 "base_bdevs_list": [ 00:16:51.695 { 00:16:51.695 "name": "spare", 00:16:51.695 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:51.695 "is_configured": true, 00:16:51.695 "data_offset": 0, 00:16:51.695 "data_size": 65536 00:16:51.695 }, 00:16:51.695 { 00:16:51.695 "name": "BaseBdev2", 00:16:51.695 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:51.695 "is_configured": true, 00:16:51.695 "data_offset": 0, 00:16:51.695 "data_size": 65536 00:16:51.695 } 00:16:51.695 ] 00:16:51.695 }' 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.695 19:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.695 [2024-12-05 19:36:44.992852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.695 [2024-12-05 19:36:45.133010] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:51.952 [2024-12-05 19:36:45.143643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.952 [2024-12-05 19:36:45.143732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.952 [2024-12-05 19:36:45.143818] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:51.952 
[2024-12-05 19:36:45.170945] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:51.952 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.952 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.952 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.952 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.952 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.953 "name": 
"raid_bdev1", 00:16:51.953 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:51.953 "strip_size_kb": 0, 00:16:51.953 "state": "online", 00:16:51.953 "raid_level": "raid1", 00:16:51.953 "superblock": false, 00:16:51.953 "num_base_bdevs": 2, 00:16:51.953 "num_base_bdevs_discovered": 1, 00:16:51.953 "num_base_bdevs_operational": 1, 00:16:51.953 "base_bdevs_list": [ 00:16:51.953 { 00:16:51.953 "name": null, 00:16:51.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.953 "is_configured": false, 00:16:51.953 "data_offset": 0, 00:16:51.953 "data_size": 65536 00:16:51.953 }, 00:16:51.953 { 00:16:51.953 "name": "BaseBdev2", 00:16:51.953 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:51.953 "is_configured": true, 00:16:51.953 "data_offset": 0, 00:16:51.953 "data_size": 65536 00:16:51.953 } 00:16:51.953 ] 00:16:51.953 }' 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.953 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.514 123.00 IOPS, 369.00 MiB/s [2024-12-05T19:36:45.955Z] 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.514 "name": "raid_bdev1", 00:16:52.514 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:52.514 "strip_size_kb": 0, 00:16:52.514 "state": "online", 00:16:52.514 "raid_level": "raid1", 00:16:52.514 "superblock": false, 00:16:52.514 "num_base_bdevs": 2, 00:16:52.514 "num_base_bdevs_discovered": 1, 00:16:52.514 "num_base_bdevs_operational": 1, 00:16:52.514 "base_bdevs_list": [ 00:16:52.514 { 00:16:52.514 "name": null, 00:16:52.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.514 "is_configured": false, 00:16:52.514 "data_offset": 0, 00:16:52.514 "data_size": 65536 00:16:52.514 }, 00:16:52.514 { 00:16:52.514 "name": "BaseBdev2", 00:16:52.514 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:52.514 "is_configured": true, 00:16:52.514 "data_offset": 0, 00:16:52.514 "data_size": 65536 00:16:52.514 } 00:16:52.514 ] 00:16:52.514 }' 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.514 [2024-12-05 19:36:45.891978] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.514 19:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:52.514 [2024-12-05 19:36:45.940874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:52.514 [2024-12-05 19:36:45.943383] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.772 [2024-12-05 19:36:46.076338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:52.772 [2024-12-05 19:36:46.077276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:53.030 [2024-12-05 19:36:46.304802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:53.030 [2024-12-05 19:36:46.305142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:53.289 137.33 IOPS, 412.00 MiB/s [2024-12-05T19:36:46.730Z] [2024-12-05 19:36:46.664623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:53.547 [2024-12-05 19:36:46.788474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.547 19:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.805 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.805 "name": "raid_bdev1", 00:16:53.805 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:53.805 "strip_size_kb": 0, 00:16:53.805 "state": "online", 00:16:53.805 "raid_level": "raid1", 00:16:53.805 "superblock": false, 00:16:53.805 "num_base_bdevs": 2, 00:16:53.805 "num_base_bdevs_discovered": 2, 00:16:53.805 "num_base_bdevs_operational": 2, 00:16:53.805 "process": { 00:16:53.805 "type": "rebuild", 00:16:53.805 "target": "spare", 00:16:53.805 "progress": { 00:16:53.805 "blocks": 10240, 00:16:53.805 "percent": 15 00:16:53.805 } 00:16:53.805 }, 00:16:53.805 "base_bdevs_list": [ 00:16:53.805 { 00:16:53.805 "name": "spare", 00:16:53.805 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:53.805 "is_configured": true, 00:16:53.805 "data_offset": 0, 00:16:53.805 "data_size": 65536 00:16:53.805 }, 00:16:53.805 { 00:16:53.805 "name": "BaseBdev2", 00:16:53.805 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:53.805 "is_configured": true, 00:16:53.805 "data_offset": 0, 00:16:53.805 "data_size": 65536 00:16:53.805 } 00:16:53.805 ] 00:16:53.805 }' 00:16:53.805 19:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.805 19:36:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=441 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.805 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 [2024-12-05 
19:36:47.110803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.806 "name": "raid_bdev1", 00:16:53.806 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:53.806 "strip_size_kb": 0, 00:16:53.806 "state": "online", 00:16:53.806 "raid_level": "raid1", 00:16:53.806 "superblock": false, 00:16:53.806 "num_base_bdevs": 2, 00:16:53.806 "num_base_bdevs_discovered": 2, 00:16:53.806 "num_base_bdevs_operational": 2, 00:16:53.806 "process": { 00:16:53.806 "type": "rebuild", 00:16:53.806 "target": "spare", 00:16:53.806 "progress": { 00:16:53.806 "blocks": 14336, 00:16:53.806 "percent": 21 00:16:53.806 } 00:16:53.806 }, 00:16:53.806 "base_bdevs_list": [ 00:16:53.806 { 00:16:53.806 "name": "spare", 00:16:53.806 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:53.806 "is_configured": true, 00:16:53.806 "data_offset": 0, 00:16:53.806 "data_size": 65536 00:16:53.806 }, 00:16:53.806 { 00:16:53.806 "name": "BaseBdev2", 00:16:53.806 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:53.806 "is_configured": true, 00:16:53.806 "data_offset": 0, 00:16:53.806 "data_size": 65536 00:16:53.806 } 00:16:53.806 ] 00:16:53.806 }' 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.806 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.806 [2024-12-05 19:36:47.229159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:53.806 [2024-12-05 19:36:47.229615] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:54.064 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.064 19:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.064 132.75 IOPS, 398.25 MiB/s [2024-12-05T19:36:47.505Z] [2024-12-05 19:36:47.480050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:54.322 [2024-12-05 19:36:47.699884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:54.903 [2024-12-05 19:36:48.233070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.903 19:36:48 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.903 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.903 "name": "raid_bdev1", 00:16:54.903 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:54.903 "strip_size_kb": 0, 00:16:54.903 "state": "online", 00:16:54.903 "raid_level": "raid1", 00:16:54.903 "superblock": false, 00:16:54.903 "num_base_bdevs": 2, 00:16:54.903 "num_base_bdevs_discovered": 2, 00:16:54.903 "num_base_bdevs_operational": 2, 00:16:54.903 "process": { 00:16:54.903 "type": "rebuild", 00:16:54.904 "target": "spare", 00:16:54.904 "progress": { 00:16:54.904 "blocks": 32768, 00:16:54.904 "percent": 50 00:16:54.904 } 00:16:54.904 }, 00:16:54.904 "base_bdevs_list": [ 00:16:54.904 { 00:16:54.904 "name": "spare", 00:16:54.904 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:54.904 "is_configured": true, 00:16:54.904 "data_offset": 0, 00:16:54.904 "data_size": 65536 00:16:54.904 }, 00:16:54.904 { 00:16:54.904 "name": "BaseBdev2", 00:16:54.904 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:54.904 "is_configured": true, 00:16:54.904 "data_offset": 0, 00:16:54.904 "data_size": 65536 00:16:54.904 } 00:16:54.904 ] 00:16:54.904 }' 00:16:55.165 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.165 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.165 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.165 115.60 IOPS, 346.80 MiB/s [2024-12-05T19:36:48.606Z] 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.165 19:36:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.424 [2024-12-05 19:36:48.700685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 
offset_end: 43008 00:16:55.683 [2024-12-05 19:36:48.922066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:55.683 [2024-12-05 19:36:48.922481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:56.251 107.50 IOPS, 322.50 MiB/s [2024-12-05T19:36:49.692Z] 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.251 "name": "raid_bdev1", 00:16:56.251 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:56.251 "strip_size_kb": 0, 00:16:56.251 "state": "online", 00:16:56.251 "raid_level": "raid1", 00:16:56.251 "superblock": false, 00:16:56.251 "num_base_bdevs": 2, 00:16:56.251 
"num_base_bdevs_discovered": 2, 00:16:56.251 "num_base_bdevs_operational": 2, 00:16:56.251 "process": { 00:16:56.251 "type": "rebuild", 00:16:56.251 "target": "spare", 00:16:56.251 "progress": { 00:16:56.251 "blocks": 49152, 00:16:56.251 "percent": 75 00:16:56.251 } 00:16:56.251 }, 00:16:56.251 "base_bdevs_list": [ 00:16:56.251 { 00:16:56.251 "name": "spare", 00:16:56.251 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:56.251 "is_configured": true, 00:16:56.251 "data_offset": 0, 00:16:56.251 "data_size": 65536 00:16:56.251 }, 00:16:56.251 { 00:16:56.251 "name": "BaseBdev2", 00:16:56.251 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:56.251 "is_configured": true, 00:16:56.251 "data_offset": 0, 00:16:56.251 "data_size": 65536 00:16:56.251 } 00:16:56.251 ] 00:16:56.251 }' 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.251 19:36:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.251 [2024-12-05 19:36:49.633899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:56.819 [2024-12-05 19:36:50.058403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:57.079 95.43 IOPS, 286.29 MiB/s [2024-12-05T19:36:50.520Z] [2024-12-05 19:36:50.506102] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:57.338 [2024-12-05 19:36:50.606131] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
00:16:57.338 [2024-12-05 19:36:50.608895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.338 "name": "raid_bdev1", 00:16:57.338 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:57.338 "strip_size_kb": 0, 00:16:57.338 "state": "online", 00:16:57.338 "raid_level": "raid1", 00:16:57.338 "superblock": false, 00:16:57.338 "num_base_bdevs": 2, 00:16:57.338 "num_base_bdevs_discovered": 2, 00:16:57.338 "num_base_bdevs_operational": 2, 00:16:57.338 "base_bdevs_list": [ 00:16:57.338 { 00:16:57.338 "name": "spare", 00:16:57.338 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:57.338 "is_configured": true, 00:16:57.338 "data_offset": 0, 
00:16:57.338 "data_size": 65536 00:16:57.338 }, 00:16:57.338 { 00:16:57.338 "name": "BaseBdev2", 00:16:57.338 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:57.338 "is_configured": true, 00:16:57.338 "data_offset": 0, 00:16:57.338 "data_size": 65536 00:16:57.338 } 00:16:57.338 ] 00:16:57.338 }' 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:57.338 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.597 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:57.597 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.598 19:36:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.598 "name": "raid_bdev1", 00:16:57.598 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:57.598 "strip_size_kb": 0, 00:16:57.598 "state": "online", 00:16:57.598 "raid_level": "raid1", 00:16:57.598 "superblock": false, 00:16:57.598 "num_base_bdevs": 2, 00:16:57.598 "num_base_bdevs_discovered": 2, 00:16:57.598 "num_base_bdevs_operational": 2, 00:16:57.598 "base_bdevs_list": [ 00:16:57.598 { 00:16:57.598 "name": "spare", 00:16:57.598 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:57.598 "is_configured": true, 00:16:57.598 "data_offset": 0, 00:16:57.598 "data_size": 65536 00:16:57.598 }, 00:16:57.598 { 00:16:57.598 "name": "BaseBdev2", 00:16:57.598 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:57.598 "is_configured": true, 00:16:57.598 "data_offset": 0, 00:16:57.598 "data_size": 65536 00:16:57.598 } 00:16:57.598 ] 00:16:57.598 }' 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.598 19:36:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.598 19:36:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.598 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.598 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.598 "name": "raid_bdev1", 00:16:57.598 "uuid": "1ce38534-163a-4a08-aaf5-1ad6d7a8b09d", 00:16:57.598 "strip_size_kb": 0, 00:16:57.598 "state": "online", 00:16:57.598 "raid_level": "raid1", 00:16:57.598 "superblock": false, 00:16:57.598 "num_base_bdevs": 2, 00:16:57.598 "num_base_bdevs_discovered": 2, 00:16:57.598 "num_base_bdevs_operational": 2, 00:16:57.598 "base_bdevs_list": [ 00:16:57.598 { 00:16:57.598 "name": "spare", 00:16:57.598 "uuid": "96684040-1236-51e3-95f7-3f28a360ca98", 00:16:57.598 "is_configured": true, 00:16:57.598 "data_offset": 0, 00:16:57.598 "data_size": 65536 00:16:57.598 }, 00:16:57.598 { 00:16:57.598 "name": "BaseBdev2", 00:16:57.598 "uuid": "7b494d61-0be2-50c2-8ed9-9527f1c58877", 00:16:57.598 "is_configured": true, 00:16:57.598 "data_offset": 0, 00:16:57.598 "data_size": 65536 00:16:57.598 } 
00:16:57.598 ] 00:16:57.598 }' 00:16:57.598 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.598 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.166 88.00 IOPS, 264.00 MiB/s [2024-12-05T19:36:51.607Z] 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.166 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.166 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.166 [2024-12-05 19:36:51.511977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.166 [2024-12-05 19:36:51.512228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.166 00:16:58.166 Latency(us) 00:16:58.166 [2024-12-05T19:36:51.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.166 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:58.166 raid_bdev1 : 8.20 86.36 259.08 0.00 0.00 15591.23 266.24 118203.11 00:16:58.166 [2024-12-05T19:36:51.607Z] =================================================================================================================== 00:16:58.166 [2024-12-05T19:36:51.607Z] Total : 86.36 259.08 0.00 0.00 15591.23 266.24 118203.11 00:16:58.425 { 00:16:58.425 "results": [ 00:16:58.425 { 00:16:58.425 "job": "raid_bdev1", 00:16:58.425 "core_mask": "0x1", 00:16:58.425 "workload": "randrw", 00:16:58.425 "percentage": 50, 00:16:58.425 "status": "finished", 00:16:58.425 "queue_depth": 2, 00:16:58.425 "io_size": 3145728, 00:16:58.425 "runtime": 8.198372, 00:16:58.425 "iops": 86.3586087579339, 00:16:58.425 "mibps": 259.0758262738017, 00:16:58.425 "io_failed": 0, 00:16:58.425 "io_timeout": 0, 00:16:58.425 "avg_latency_us": 15591.228351309705, 00:16:58.425 "min_latency_us": 266.24, 
00:16:58.425 "max_latency_us": 118203.11272727273 00:16:58.425 } 00:16:58.425 ], 00:16:58.425 "core_count": 1 00:16:58.425 } 00:16:58.425 [2024-12-05 19:36:51.606887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.425 [2024-12-05 19:36:51.606972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.425 [2024-12-05 19:36:51.607085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.425 [2024-12-05 19:36:51.607115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.425 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:58.685 /dev/nbd0 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.685 1+0 records in 
00:16:58.685 1+0 records out 00:16:58.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668697 s, 6.1 MB/s 00:16:58.685 19:36:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.685 19:36:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.685 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:58.943 /dev/nbd1 00:16:58.943 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.944 1+0 records in 00:16:58.944 1+0 records out 00:16:58.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062202 s, 6.6 MB/s 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.944 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.202 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:59.460 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:59.460 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.461 19:36:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 76697 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76697 ']' 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76697 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76697 00:16:59.719 killing process with pid 76697 00:16:59.719 Received shutdown signal, test time was about 9.755249 seconds 00:16:59.719 00:16:59.719 Latency(us) 00:16:59.719 [2024-12-05T19:36:53.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.719 [2024-12-05T19:36:53.160Z] =================================================================================================================== 00:16:59.719 [2024-12-05T19:36:53.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76697' 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76697 00:16:59.719 [2024-12-05 19:36:53.144824] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.719 19:36:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76697 00:16:59.978 [2024-12-05 19:36:53.349690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:01.368 00:17:01.368 real 0m13.058s 00:17:01.368 user 
0m17.102s 00:17:01.368 sys 0m1.458s 00:17:01.368 ************************************ 00:17:01.368 END TEST raid_rebuild_test_io 00:17:01.368 ************************************ 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.368 19:36:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:17:01.368 19:36:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:01.368 19:36:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.368 19:36:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.368 ************************************ 00:17:01.368 START TEST raid_rebuild_test_sb_io 00:17:01.368 ************************************ 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77083 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 77083 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77083 ']' 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.368 19:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:01.368 Zero copy mechanism will not be used. 00:17:01.368 [2024-12-05 19:36:54.633334] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:17:01.368 [2024-12-05 19:36:54.633554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77083 ] 00:17:01.627 [2024-12-05 19:36:54.816599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.627 [2024-12-05 19:36:54.956330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.885 [2024-12-05 19:36:55.163563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.885 [2024-12-05 19:36:55.163605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.451 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.451 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 BaseBdev1_malloc 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 [2024-12-05 19:36:55.638179] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.452 [2024-12-05 19:36:55.638400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.452 [2024-12-05 19:36:55.638479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:02.452 [2024-12-05 19:36:55.638679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.452 [2024-12-05 19:36:55.641577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.452 BaseBdev1 00:17:02.452 [2024-12-05 19:36:55.641761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 BaseBdev2_malloc 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 [2024-12-05 19:36:55.694614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:02.452 [2024-12-05 19:36:55.694844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:02.452 [2024-12-05 19:36:55.694889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:02.452 [2024-12-05 19:36:55.694919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.452 [2024-12-05 19:36:55.697629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.452 [2024-12-05 19:36:55.697680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.452 BaseBdev2 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 spare_malloc 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 spare_delay 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 
[2024-12-05 19:36:55.767018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.452 [2024-12-05 19:36:55.767274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.452 [2024-12-05 19:36:55.767351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:02.452 [2024-12-05 19:36:55.767573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.452 [2024-12-05 19:36:55.770682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.452 [2024-12-05 19:36:55.770911] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.452 spare 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 [2024-12-05 19:36:55.775271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.452 [2024-12-05 19:36:55.777882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.452 [2024-12-05 19:36:55.778113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:02.452 [2024-12-05 19:36:55.778135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.452 [2024-12-05 19:36:55.778417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:02.452 [2024-12-05 19:36:55.778617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:02.452 [2024-12-05 
19:36:55.778632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:02.452 [2024-12-05 19:36:55.778883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.452 "name": "raid_bdev1", 00:17:02.452 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:02.452 "strip_size_kb": 0, 00:17:02.452 "state": "online", 00:17:02.452 "raid_level": "raid1", 00:17:02.452 "superblock": true, 00:17:02.452 "num_base_bdevs": 2, 00:17:02.452 "num_base_bdevs_discovered": 2, 00:17:02.452 "num_base_bdevs_operational": 2, 00:17:02.452 "base_bdevs_list": [ 00:17:02.452 { 00:17:02.452 "name": "BaseBdev1", 00:17:02.452 "uuid": "f3a49568-7dd9-5716-aaf0-b5eaeca75805", 00:17:02.452 "is_configured": true, 00:17:02.452 "data_offset": 2048, 00:17:02.452 "data_size": 63488 00:17:02.452 }, 00:17:02.452 { 00:17:02.452 "name": "BaseBdev2", 00:17:02.452 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:02.452 "is_configured": true, 00:17:02.452 "data_offset": 2048, 00:17:02.452 "data_size": 63488 00:17:02.452 } 00:17:02.452 ] 00:17:02.452 }' 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.452 19:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.020 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.021 [2024-12-05 19:36:56.295836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.021 [2024-12-05 19:36:56.395470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.021 "name": "raid_bdev1", 00:17:03.021 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:03.021 "strip_size_kb": 0, 00:17:03.021 "state": "online", 00:17:03.021 "raid_level": "raid1", 00:17:03.021 "superblock": true, 00:17:03.021 "num_base_bdevs": 2, 00:17:03.021 "num_base_bdevs_discovered": 1, 00:17:03.021 "num_base_bdevs_operational": 1, 00:17:03.021 "base_bdevs_list": [ 00:17:03.021 { 00:17:03.021 "name": null, 00:17:03.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.021 "is_configured": false, 00:17:03.021 "data_offset": 0, 00:17:03.021 "data_size": 63488 00:17:03.021 }, 00:17:03.021 { 00:17:03.021 "name": "BaseBdev2", 00:17:03.021 "uuid": 
"f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:03.021 "is_configured": true, 00:17:03.021 "data_offset": 2048, 00:17:03.021 "data_size": 63488 00:17:03.021 } 00:17:03.021 ] 00:17:03.021 }' 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.021 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.281 [2024-12-05 19:36:56.519968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:03.281 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:03.281 Zero copy mechanism will not be used. 00:17:03.281 Running I/O for 60 seconds... 00:17:03.540 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:03.540 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.540 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.540 [2024-12-05 19:36:56.959383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.800 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.800 19:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:03.800 [2024-12-05 19:36:57.002441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:03.800 [2024-12-05 19:36:57.005132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.800 [2024-12-05 19:36:57.132492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:03.800 [2024-12-05 19:36:57.133264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:04.060 [2024-12-05 19:36:57.353168] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:04.060 [2024-12-05 19:36:57.353571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:04.319 171.00 IOPS, 513.00 MiB/s [2024-12-05T19:36:57.760Z] [2024-12-05 19:36:57.700884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:04.578 [2024-12-05 19:36:57.919141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:04.578 [2024-12-05 19:36:57.919517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.578 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.578 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 19:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.838 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.838 19:36:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.839 "name": "raid_bdev1", 00:17:04.839 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:04.839 "strip_size_kb": 0, 00:17:04.839 "state": "online", 00:17:04.839 "raid_level": "raid1", 00:17:04.839 "superblock": true, 00:17:04.839 "num_base_bdevs": 2, 00:17:04.839 "num_base_bdevs_discovered": 2, 00:17:04.839 "num_base_bdevs_operational": 2, 00:17:04.839 "process": { 00:17:04.839 "type": "rebuild", 00:17:04.839 "target": "spare", 00:17:04.839 "progress": { 00:17:04.839 "blocks": 10240, 00:17:04.839 "percent": 16 00:17:04.839 } 00:17:04.839 }, 00:17:04.839 "base_bdevs_list": [ 00:17:04.839 { 00:17:04.839 "name": "spare", 00:17:04.839 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:04.839 "is_configured": true, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 }, 00:17:04.839 { 00:17:04.839 "name": "BaseBdev2", 00:17:04.839 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:04.839 "is_configured": true, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 } 00:17:04.839 ] 00:17:04.839 }' 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.839 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.839 [2024-12-05 
19:36:58.185127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.839 [2024-12-05 19:36:58.262131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:04.839 [2024-12-05 19:36:58.263124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:05.098 [2024-12-05 19:36:58.372213] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:05.098 [2024-12-05 19:36:58.382962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.098 [2024-12-05 19:36:58.383218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.098 [2024-12-05 19:36:58.383271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:05.098 [2024-12-05 19:36:58.441275] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:05.098 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.098 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.098 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.098 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.098 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.098 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.099 
19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.099 "name": "raid_bdev1", 00:17:05.099 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:05.099 "strip_size_kb": 0, 00:17:05.099 "state": "online", 00:17:05.099 "raid_level": "raid1", 00:17:05.099 "superblock": true, 00:17:05.099 "num_base_bdevs": 2, 00:17:05.099 "num_base_bdevs_discovered": 1, 00:17:05.099 "num_base_bdevs_operational": 1, 00:17:05.099 "base_bdevs_list": [ 00:17:05.099 { 00:17:05.099 "name": null, 00:17:05.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.099 "is_configured": false, 00:17:05.099 "data_offset": 0, 00:17:05.099 "data_size": 63488 00:17:05.099 }, 00:17:05.099 { 00:17:05.099 "name": "BaseBdev2", 00:17:05.099 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:05.099 "is_configured": true, 00:17:05.099 "data_offset": 2048, 00:17:05.099 "data_size": 63488 00:17:05.099 } 00:17:05.099 ] 00:17:05.099 }' 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.099 19:36:58 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:17:05.617 127.00 IOPS, 381.00 MiB/s [2024-12-05T19:36:59.058Z] 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.617 19:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.617 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.617 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.617 "name": "raid_bdev1", 00:17:05.617 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:05.617 "strip_size_kb": 0, 00:17:05.617 "state": "online", 00:17:05.617 "raid_level": "raid1", 00:17:05.617 "superblock": true, 00:17:05.617 "num_base_bdevs": 2, 00:17:05.617 "num_base_bdevs_discovered": 1, 00:17:05.617 "num_base_bdevs_operational": 1, 00:17:05.617 "base_bdevs_list": [ 00:17:05.617 { 00:17:05.617 "name": null, 00:17:05.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.617 "is_configured": false, 00:17:05.617 "data_offset": 0, 00:17:05.617 "data_size": 63488 00:17:05.617 }, 00:17:05.617 { 00:17:05.617 "name": "BaseBdev2", 
00:17:05.617 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:05.617 "is_configured": true, 00:17:05.617 "data_offset": 2048, 00:17:05.617 "data_size": 63488 00:17:05.617 } 00:17:05.617 ] 00:17:05.617 }' 00:17:05.618 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.877 [2024-12-05 19:36:59.164695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.877 19:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:05.877 [2024-12-05 19:36:59.228361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:05.877 [2024-12-05 19:36:59.230817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.136 [2024-12-05 19:36:59.353621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:06.136 [2024-12-05 19:36:59.354261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:06.136 145.33 IOPS, 436.00 MiB/s [2024-12-05T19:36:59.577Z] [2024-12-05 
19:36:59.566221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:06.136 [2024-12-05 19:36:59.566680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:06.704 [2024-12-05 19:36:59.914248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:06.704 [2024-12-05 19:37:00.131807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.963 "name": "raid_bdev1", 00:17:06.963 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:06.963 "strip_size_kb": 0, 
00:17:06.963 "state": "online", 00:17:06.963 "raid_level": "raid1", 00:17:06.963 "superblock": true, 00:17:06.963 "num_base_bdevs": 2, 00:17:06.963 "num_base_bdevs_discovered": 2, 00:17:06.963 "num_base_bdevs_operational": 2, 00:17:06.963 "process": { 00:17:06.963 "type": "rebuild", 00:17:06.963 "target": "spare", 00:17:06.963 "progress": { 00:17:06.963 "blocks": 10240, 00:17:06.963 "percent": 16 00:17:06.963 } 00:17:06.963 }, 00:17:06.963 "base_bdevs_list": [ 00:17:06.963 { 00:17:06.963 "name": "spare", 00:17:06.963 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:06.963 "is_configured": true, 00:17:06.963 "data_offset": 2048, 00:17:06.963 "data_size": 63488 00:17:06.963 }, 00:17:06.963 { 00:17:06.963 "name": "BaseBdev2", 00:17:06.963 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:06.963 "is_configured": true, 00:17:06.963 "data_offset": 2048, 00:17:06.963 "data_size": 63488 00:17:06.963 } 00:17:06.963 ] 00:17:06.963 }' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:06.963 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:06.963 19:37:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=454 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.963 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.964 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.964 [2024-12-05 19:37:00.400959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:06.964 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.223 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.223 "name": "raid_bdev1", 00:17:07.223 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:07.223 "strip_size_kb": 0, 00:17:07.223 "state": "online", 00:17:07.223 "raid_level": "raid1", 00:17:07.223 "superblock": true, 00:17:07.223 "num_base_bdevs": 2, 00:17:07.223 
"num_base_bdevs_discovered": 2, 00:17:07.223 "num_base_bdevs_operational": 2, 00:17:07.223 "process": { 00:17:07.223 "type": "rebuild", 00:17:07.223 "target": "spare", 00:17:07.223 "progress": { 00:17:07.223 "blocks": 12288, 00:17:07.223 "percent": 19 00:17:07.223 } 00:17:07.223 }, 00:17:07.223 "base_bdevs_list": [ 00:17:07.223 { 00:17:07.223 "name": "spare", 00:17:07.223 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:07.223 "is_configured": true, 00:17:07.223 "data_offset": 2048, 00:17:07.223 "data_size": 63488 00:17:07.223 }, 00:17:07.223 { 00:17:07.223 "name": "BaseBdev2", 00:17:07.223 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:07.223 "is_configured": true, 00:17:07.223 "data_offset": 2048, 00:17:07.223 "data_size": 63488 00:17:07.223 } 00:17:07.223 ] 00:17:07.223 }' 00:17:07.223 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.223 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.223 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.223 128.25 IOPS, 384.75 MiB/s [2024-12-05T19:37:00.664Z] 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.223 19:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.223 [2024-12-05 19:37:00.570282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:07.482 [2024-12-05 19:37:00.786998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:07.482 [2024-12-05 19:37:00.787544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:07.743 [2024-12-05 19:37:00.924892] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:08.313 112.00 IOPS, 336.00 MiB/s [2024-12-05T19:37:01.754Z] 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.313 "name": "raid_bdev1", 00:17:08.313 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:08.313 "strip_size_kb": 0, 00:17:08.313 "state": "online", 00:17:08.313 "raid_level": "raid1", 00:17:08.313 "superblock": true, 00:17:08.313 "num_base_bdevs": 2, 00:17:08.313 "num_base_bdevs_discovered": 2, 00:17:08.313 "num_base_bdevs_operational": 2, 00:17:08.313 "process": { 00:17:08.313 "type": "rebuild", 00:17:08.313 "target": "spare", 00:17:08.313 "progress": { 00:17:08.313 "blocks": 30720, 
00:17:08.313 "percent": 48 00:17:08.313 } 00:17:08.313 }, 00:17:08.313 "base_bdevs_list": [ 00:17:08.313 { 00:17:08.313 "name": "spare", 00:17:08.313 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:08.313 "is_configured": true, 00:17:08.313 "data_offset": 2048, 00:17:08.313 "data_size": 63488 00:17:08.313 }, 00:17:08.313 { 00:17:08.313 "name": "BaseBdev2", 00:17:08.313 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:08.313 "is_configured": true, 00:17:08.313 "data_offset": 2048, 00:17:08.313 "data_size": 63488 00:17:08.313 } 00:17:08.313 ] 00:17:08.313 }' 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.313 [2024-12-05 19:37:01.610653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:08.313 [2024-12-05 19:37:01.611386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.313 19:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.572 [2024-12-05 19:37:01.824925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:08.572 [2024-12-05 19:37:01.825375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:08.831 [2024-12-05 19:37:02.204235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:09.401 102.33 IOPS, 307.00 MiB/s 
[2024-12-05T19:37:02.842Z] [2024-12-05 19:37:02.542324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:09.401 [2024-12-05 19:37:02.543077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.401 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.401 "name": "raid_bdev1", 00:17:09.401 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:09.401 "strip_size_kb": 0, 00:17:09.401 "state": "online", 00:17:09.401 "raid_level": "raid1", 00:17:09.401 "superblock": true, 00:17:09.401 "num_base_bdevs": 2, 00:17:09.401 
"num_base_bdevs_discovered": 2, 00:17:09.401 "num_base_bdevs_operational": 2, 00:17:09.401 "process": { 00:17:09.401 "type": "rebuild", 00:17:09.401 "target": "spare", 00:17:09.401 "progress": { 00:17:09.401 "blocks": 47104, 00:17:09.401 "percent": 74 00:17:09.401 } 00:17:09.401 }, 00:17:09.401 "base_bdevs_list": [ 00:17:09.401 { 00:17:09.401 "name": "spare", 00:17:09.401 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:09.401 "is_configured": true, 00:17:09.401 "data_offset": 2048, 00:17:09.401 "data_size": 63488 00:17:09.401 }, 00:17:09.401 { 00:17:09.402 "name": "BaseBdev2", 00:17:09.402 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:09.402 "is_configured": true, 00:17:09.402 "data_offset": 2048, 00:17:09.402 "data_size": 63488 00:17:09.402 } 00:17:09.402 ] 00:17:09.402 }' 00:17:09.402 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.402 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.402 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.660 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.660 19:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.226 92.86 IOPS, 278.57 MiB/s [2024-12-05T19:37:03.667Z] [2024-12-05 19:37:03.563572] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:10.493 [2024-12-05 19:37:03.671880] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:10.493 [2024-12-05 19:37:03.674759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.493 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.493 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.493 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.493 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.494 "name": "raid_bdev1", 00:17:10.494 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:10.494 "strip_size_kb": 0, 00:17:10.494 "state": "online", 00:17:10.494 "raid_level": "raid1", 00:17:10.494 "superblock": true, 00:17:10.494 "num_base_bdevs": 2, 00:17:10.494 "num_base_bdevs_discovered": 2, 00:17:10.494 "num_base_bdevs_operational": 2, 00:17:10.494 "base_bdevs_list": [ 00:17:10.494 { 00:17:10.494 "name": "spare", 00:17:10.494 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:10.494 "is_configured": true, 00:17:10.494 "data_offset": 2048, 00:17:10.494 "data_size": 63488 00:17:10.494 }, 00:17:10.494 { 00:17:10.494 "name": "BaseBdev2", 00:17:10.494 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:10.494 "is_configured": true, 00:17:10.494 "data_offset": 2048, 00:17:10.494 "data_size": 63488 00:17:10.494 
} 00:17:10.494 ] 00:17:10.494 }' 00:17:10.494 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.761 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:10.761 19:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.761 "name": "raid_bdev1", 00:17:10.761 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:10.761 "strip_size_kb": 0, 00:17:10.761 "state": "online", 
00:17:10.761 "raid_level": "raid1", 00:17:10.761 "superblock": true, 00:17:10.761 "num_base_bdevs": 2, 00:17:10.761 "num_base_bdevs_discovered": 2, 00:17:10.761 "num_base_bdevs_operational": 2, 00:17:10.761 "base_bdevs_list": [ 00:17:10.761 { 00:17:10.761 "name": "spare", 00:17:10.761 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:10.761 "is_configured": true, 00:17:10.761 "data_offset": 2048, 00:17:10.761 "data_size": 63488 00:17:10.761 }, 00:17:10.761 { 00:17:10.761 "name": "BaseBdev2", 00:17:10.761 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:10.761 "is_configured": true, 00:17:10.761 "data_offset": 2048, 00:17:10.761 "data_size": 63488 00:17:10.761 } 00:17:10.761 ] 00:17:10.761 }' 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.761 19:37:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.761 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.020 "name": "raid_bdev1", 00:17:11.020 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:11.020 "strip_size_kb": 0, 00:17:11.020 "state": "online", 00:17:11.020 "raid_level": "raid1", 00:17:11.020 "superblock": true, 00:17:11.020 "num_base_bdevs": 2, 00:17:11.020 "num_base_bdevs_discovered": 2, 00:17:11.020 "num_base_bdevs_operational": 2, 00:17:11.020 "base_bdevs_list": [ 00:17:11.020 { 00:17:11.020 "name": "spare", 00:17:11.020 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:11.020 "is_configured": true, 00:17:11.020 "data_offset": 2048, 00:17:11.020 "data_size": 63488 00:17:11.020 }, 00:17:11.020 { 00:17:11.020 "name": "BaseBdev2", 00:17:11.020 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:11.020 "is_configured": true, 00:17:11.020 "data_offset": 2048, 00:17:11.020 "data_size": 63488 00:17:11.020 } 00:17:11.020 ] 00:17:11.020 }' 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.020 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.588 85.88 IOPS, 257.62 MiB/s [2024-12-05T19:37:05.030Z] 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.589 [2024-12-05 19:37:04.740644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.589 [2024-12-05 19:37:04.740680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.589 00:17:11.589 Latency(us) 00:17:11.589 [2024-12-05T19:37:05.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.589 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:11.589 raid_bdev1 : 8.28 83.72 251.16 0.00 0.00 15884.87 268.10 119632.99 00:17:11.589 [2024-12-05T19:37:05.030Z] =================================================================================================================== 00:17:11.589 [2024-12-05T19:37:05.030Z] Total : 83.72 251.16 0.00 0.00 15884.87 268.10 119632.99 00:17:11.589 [2024-12-05 19:37:04.821450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.589 [2024-12-05 19:37:04.821785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.589 [2024-12-05 19:37:04.822054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr{ 00:17:11.589 "results": [ 00:17:11.589 { 00:17:11.589 "job": "raid_bdev1", 00:17:11.589 "core_mask": "0x1", 00:17:11.589 "workload": "randrw", 00:17:11.589 "percentage": 50, 00:17:11.589 "status": "finished", 00:17:11.589 "queue_depth": 2, 00:17:11.589 "io_size": 3145728, 00:17:11.589 "runtime": 8.277656, 00:17:11.589 "iops": 83.71935243503717, 00:17:11.589 "mibps": 
251.1580573051115, 00:17:11.589 "io_failed": 0, 00:17:11.589 "io_timeout": 0, 00:17:11.589 "avg_latency_us": 15884.867500983866, 00:17:11.589 "min_latency_us": 268.1018181818182, 00:17:11.589 "max_latency_us": 119632.98909090909 00:17:11.589 } 00:17:11.589 ], 00:17:11.589 "core_count": 1 00:17:11.589 } 00:17:11.589 ee all in destruct 00:17:11.589 [2024-12-05 19:37:04.822342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.589 19:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:11.847 /dev/nbd0 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.847 1+0 records in 00:17:11.847 1+0 records out 00:17:11.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000349026 s, 11.7 MB/s 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.847 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:12.170 /dev/nbd1 00:17:12.170 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.170 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.171 1+0 records in 00:17:12.171 1+0 records out 00:17:12.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399178 s, 10.3 MB/s 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.171 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.428 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.685 19:37:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.685 19:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.685 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.944 19:37:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.944 [2024-12-05 19:37:06.282317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:12.944 [2024-12-05 19:37:06.282393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.944 [2024-12-05 19:37:06.282433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:12.944 [2024-12-05 19:37:06.282452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.944 [2024-12-05 19:37:06.285572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.944 [2024-12-05 19:37:06.285621] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:12.944 [2024-12-05 19:37:06.285759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:12.944 [2024-12-05 19:37:06.285832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.944 spare 00:17:12.944 [2024-12-05 19:37:06.286021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.944 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.202 [2024-12-05 19:37:06.386185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:13.203 [2024-12-05 19:37:06.386235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:13.203 [2024-12-05 19:37:06.386667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:17:13.203 [2024-12-05 19:37:06.386985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:13.203 [2024-12-05 19:37:06.387012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:13.203 [2024-12-05 19:37:06.387295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.203 "name": "raid_bdev1", 00:17:13.203 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:13.203 "strip_size_kb": 0, 00:17:13.203 "state": "online", 00:17:13.203 "raid_level": "raid1", 00:17:13.203 "superblock": true, 00:17:13.203 "num_base_bdevs": 2, 00:17:13.203 "num_base_bdevs_discovered": 2, 00:17:13.203 "num_base_bdevs_operational": 2, 00:17:13.203 "base_bdevs_list": [ 00:17:13.203 { 00:17:13.203 "name": "spare", 00:17:13.203 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:13.203 "is_configured": true, 00:17:13.203 "data_offset": 2048, 00:17:13.203 "data_size": 63488 00:17:13.203 }, 00:17:13.203 { 00:17:13.203 "name": "BaseBdev2", 00:17:13.203 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:13.203 "is_configured": true, 00:17:13.203 "data_offset": 2048, 00:17:13.203 "data_size": 63488 00:17:13.203 } 00:17:13.203 ] 00:17:13.203 }' 
00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.203 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.769 "name": "raid_bdev1", 00:17:13.769 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:13.769 "strip_size_kb": 0, 00:17:13.769 "state": "online", 00:17:13.769 "raid_level": "raid1", 00:17:13.769 "superblock": true, 00:17:13.769 "num_base_bdevs": 2, 00:17:13.769 "num_base_bdevs_discovered": 2, 00:17:13.769 "num_base_bdevs_operational": 2, 00:17:13.769 "base_bdevs_list": [ 00:17:13.769 { 00:17:13.769 "name": "spare", 00:17:13.769 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:13.769 "is_configured": true, 00:17:13.769 "data_offset": 
2048, 00:17:13.769 "data_size": 63488 00:17:13.769 }, 00:17:13.769 { 00:17:13.769 "name": "BaseBdev2", 00:17:13.769 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:13.769 "is_configured": true, 00:17:13.769 "data_offset": 2048, 00:17:13.769 "data_size": 63488 00:17:13.769 } 00:17:13.769 ] 00:17:13.769 }' 00:17:13.769 19:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.769 [2024-12-05 19:37:07.147615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.769 
19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.769 "name": "raid_bdev1", 00:17:13.769 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:13.769 "strip_size_kb": 0, 00:17:13.769 "state": "online", 00:17:13.769 "raid_level": "raid1", 00:17:13.769 
"superblock": true, 00:17:13.769 "num_base_bdevs": 2, 00:17:13.769 "num_base_bdevs_discovered": 1, 00:17:13.769 "num_base_bdevs_operational": 1, 00:17:13.769 "base_bdevs_list": [ 00:17:13.769 { 00:17:13.769 "name": null, 00:17:13.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.769 "is_configured": false, 00:17:13.769 "data_offset": 0, 00:17:13.769 "data_size": 63488 00:17:13.769 }, 00:17:13.769 { 00:17:13.769 "name": "BaseBdev2", 00:17:13.769 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:13.769 "is_configured": true, 00:17:13.769 "data_offset": 2048, 00:17:13.769 "data_size": 63488 00:17:13.769 } 00:17:13.769 ] 00:17:13.769 }' 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.769 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.337 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.337 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.337 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.337 [2024-12-05 19:37:07.659904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.337 [2024-12-05 19:37:07.660339] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.337 [2024-12-05 19:37:07.660372] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:14.337 [2024-12-05 19:37:07.660448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.337 [2024-12-05 19:37:07.677140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:17:14.337 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.337 19:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.337 [2024-12-05 19:37:07.679903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.275 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.534 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.534 "name": "raid_bdev1", 00:17:15.534 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:15.534 "strip_size_kb": 0, 00:17:15.534 "state": "online", 
00:17:15.534 "raid_level": "raid1", 00:17:15.534 "superblock": true, 00:17:15.534 "num_base_bdevs": 2, 00:17:15.534 "num_base_bdevs_discovered": 2, 00:17:15.534 "num_base_bdevs_operational": 2, 00:17:15.534 "process": { 00:17:15.534 "type": "rebuild", 00:17:15.534 "target": "spare", 00:17:15.534 "progress": { 00:17:15.534 "blocks": 20480, 00:17:15.534 "percent": 32 00:17:15.534 } 00:17:15.534 }, 00:17:15.534 "base_bdevs_list": [ 00:17:15.534 { 00:17:15.534 "name": "spare", 00:17:15.534 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:15.534 "is_configured": true, 00:17:15.534 "data_offset": 2048, 00:17:15.534 "data_size": 63488 00:17:15.534 }, 00:17:15.534 { 00:17:15.534 "name": "BaseBdev2", 00:17:15.534 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:15.534 "is_configured": true, 00:17:15.534 "data_offset": 2048, 00:17:15.534 "data_size": 63488 00:17:15.534 } 00:17:15.534 ] 00:17:15.534 }' 00:17:15.534 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.534 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.535 [2024-12-05 19:37:08.849418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.535 [2024-12-05 19:37:08.889263] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.535 [2024-12-05 
19:37:08.889503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.535 [2024-12-05 19:37:08.889541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.535 [2024-12-05 19:37:08.889555] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.535 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.794 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.794 "name": "raid_bdev1", 00:17:15.794 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:15.794 "strip_size_kb": 0, 00:17:15.794 "state": "online", 00:17:15.794 "raid_level": "raid1", 00:17:15.794 "superblock": true, 00:17:15.794 "num_base_bdevs": 2, 00:17:15.794 "num_base_bdevs_discovered": 1, 00:17:15.794 "num_base_bdevs_operational": 1, 00:17:15.794 "base_bdevs_list": [ 00:17:15.794 { 00:17:15.794 "name": null, 00:17:15.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.794 "is_configured": false, 00:17:15.794 "data_offset": 0, 00:17:15.794 "data_size": 63488 00:17:15.794 }, 00:17:15.794 { 00:17:15.794 "name": "BaseBdev2", 00:17:15.794 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:15.794 "is_configured": true, 00:17:15.794 "data_offset": 2048, 00:17:15.794 "data_size": 63488 00:17:15.794 } 00:17:15.794 ] 00:17:15.794 }' 00:17:15.794 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.794 19:37:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.054 19:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.054 19:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.054 19:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.054 [2024-12-05 19:37:09.446541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.054 [2024-12-05 19:37:09.446806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.054 [2024-12-05 19:37:09.446854] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:17:16.054 [2024-12-05 19:37:09.446872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.054 [2024-12-05 19:37:09.447596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.054 [2024-12-05 19:37:09.447653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.054 [2024-12-05 19:37:09.447829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.054 [2024-12-05 19:37:09.447851] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.054 [2024-12-05 19:37:09.447869] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:16.054 [2024-12-05 19:37:09.447908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.054 [2024-12-05 19:37:09.464360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:17:16.054 spare 00:17:16.054 19:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.054 19:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:16.054 [2024-12-05 19:37:09.467120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.433 "name": "raid_bdev1", 00:17:17.433 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:17.433 "strip_size_kb": 0, 00:17:17.433 "state": "online", 00:17:17.433 "raid_level": "raid1", 00:17:17.433 "superblock": true, 00:17:17.433 "num_base_bdevs": 2, 00:17:17.433 "num_base_bdevs_discovered": 2, 00:17:17.433 "num_base_bdevs_operational": 2, 00:17:17.433 "process": { 00:17:17.433 "type": "rebuild", 00:17:17.433 "target": "spare", 00:17:17.433 "progress": { 00:17:17.433 "blocks": 20480, 00:17:17.433 "percent": 32 00:17:17.433 } 00:17:17.433 }, 00:17:17.433 "base_bdevs_list": [ 00:17:17.433 { 00:17:17.433 "name": "spare", 00:17:17.433 "uuid": "bdce2e08-7030-5baa-bb8d-022a51802d40", 00:17:17.433 "is_configured": true, 00:17:17.433 "data_offset": 2048, 00:17:17.433 "data_size": 63488 00:17:17.433 }, 00:17:17.433 { 00:17:17.433 "name": "BaseBdev2", 00:17:17.433 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:17.433 "is_configured": true, 00:17:17.433 "data_offset": 2048, 00:17:17.433 "data_size": 63488 00:17:17.433 } 00:17:17.433 ] 00:17:17.433 }' 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.433 [2024-12-05 19:37:10.625008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.433 [2024-12-05 19:37:10.676423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.433 [2024-12-05 19:37:10.676689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.433 [2024-12-05 19:37:10.676740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.433 [2024-12-05 19:37:10.676759] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.433 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.434 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.434 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.434 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.434 "name": "raid_bdev1", 00:17:17.434 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:17.434 "strip_size_kb": 0, 00:17:17.434 "state": "online", 00:17:17.434 "raid_level": "raid1", 00:17:17.434 "superblock": true, 00:17:17.434 "num_base_bdevs": 2, 00:17:17.434 "num_base_bdevs_discovered": 1, 00:17:17.434 "num_base_bdevs_operational": 1, 00:17:17.434 "base_bdevs_list": [ 00:17:17.434 { 00:17:17.434 "name": null, 00:17:17.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.434 "is_configured": false, 00:17:17.434 "data_offset": 0, 00:17:17.434 "data_size": 63488 00:17:17.434 }, 00:17:17.434 { 00:17:17.434 "name": "BaseBdev2", 00:17:17.434 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:17.434 "is_configured": true, 00:17:17.434 "data_offset": 2048, 00:17:17.434 "data_size": 63488 00:17:17.434 } 00:17:17.434 ] 00:17:17.434 }' 
00:17:17.434 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.434 19:37:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.020 "name": "raid_bdev1", 00:17:18.020 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:18.020 "strip_size_kb": 0, 00:17:18.020 "state": "online", 00:17:18.020 "raid_level": "raid1", 00:17:18.020 "superblock": true, 00:17:18.020 "num_base_bdevs": 2, 00:17:18.020 "num_base_bdevs_discovered": 1, 00:17:18.020 "num_base_bdevs_operational": 1, 00:17:18.020 "base_bdevs_list": [ 00:17:18.020 { 00:17:18.020 "name": null, 00:17:18.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.020 "is_configured": false, 00:17:18.020 "data_offset": 0, 
00:17:18.020 "data_size": 63488 00:17:18.020 }, 00:17:18.020 { 00:17:18.020 "name": "BaseBdev2", 00:17:18.020 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:18.020 "is_configured": true, 00:17:18.020 "data_offset": 2048, 00:17:18.020 "data_size": 63488 00:17:18.020 } 00:17:18.020 ] 00:17:18.020 }' 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.020 [2024-12-05 19:37:11.403417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.020 [2024-12-05 19:37:11.403476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.020 [2024-12-05 19:37:11.403512] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:18.020 [2024-12-05 19:37:11.403533] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.020 [2024-12-05 19:37:11.404166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.020 [2024-12-05 19:37:11.404205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.020 [2024-12-05 19:37:11.404303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:18.020 [2024-12-05 19:37:11.404339] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.020 [2024-12-05 19:37:11.404352] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.020 [2024-12-05 19:37:11.404371] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:18.020 BaseBdev1 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.020 19:37:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.398 "name": "raid_bdev1", 00:17:19.398 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:19.398 "strip_size_kb": 0, 00:17:19.398 "state": "online", 00:17:19.398 "raid_level": "raid1", 00:17:19.398 "superblock": true, 00:17:19.398 "num_base_bdevs": 2, 00:17:19.398 "num_base_bdevs_discovered": 1, 00:17:19.398 "num_base_bdevs_operational": 1, 00:17:19.398 "base_bdevs_list": [ 00:17:19.398 { 00:17:19.398 "name": null, 00:17:19.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.398 "is_configured": false, 00:17:19.398 "data_offset": 0, 00:17:19.398 "data_size": 63488 00:17:19.398 }, 00:17:19.398 { 00:17:19.398 "name": "BaseBdev2", 00:17:19.398 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:19.398 "is_configured": true, 00:17:19.398 "data_offset": 2048, 00:17:19.398 "data_size": 63488 00:17:19.398 } 00:17:19.398 ] 00:17:19.398 }' 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.398 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.657 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.657 "name": "raid_bdev1", 00:17:19.657 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:19.657 "strip_size_kb": 0, 00:17:19.657 "state": "online", 00:17:19.657 "raid_level": "raid1", 00:17:19.657 "superblock": true, 00:17:19.657 "num_base_bdevs": 2, 00:17:19.657 "num_base_bdevs_discovered": 1, 00:17:19.657 "num_base_bdevs_operational": 1, 00:17:19.657 "base_bdevs_list": [ 00:17:19.657 { 00:17:19.657 "name": null, 00:17:19.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.657 "is_configured": false, 00:17:19.657 "data_offset": 0, 00:17:19.657 "data_size": 63488 00:17:19.658 }, 00:17:19.658 { 00:17:19.658 "name": "BaseBdev2", 00:17:19.658 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:19.658 "is_configured": true, 
00:17:19.658 "data_offset": 2048, 00:17:19.658 "data_size": 63488 00:17:19.658 } 00:17:19.658 ] 00:17:19.658 }' 00:17:19.658 19:37:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.658 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.658 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.917 [2024-12-05 19:37:13.112248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.917 [2024-12-05 19:37:13.112623] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.917 [2024-12-05 19:37:13.112649] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.917 request: 00:17:19.917 { 00:17:19.917 "base_bdev": "BaseBdev1", 00:17:19.917 "raid_bdev": "raid_bdev1", 00:17:19.917 "method": "bdev_raid_add_base_bdev", 00:17:19.917 "req_id": 1 00:17:19.917 } 00:17:19.917 Got JSON-RPC error response 00:17:19.917 response: 00:17:19.917 { 00:17:19.917 "code": -22, 00:17:19.917 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.917 } 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.917 19:37:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.853 "name": "raid_bdev1", 00:17:20.853 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:20.853 "strip_size_kb": 0, 00:17:20.853 "state": "online", 00:17:20.853 "raid_level": "raid1", 00:17:20.853 "superblock": true, 00:17:20.853 "num_base_bdevs": 2, 00:17:20.853 "num_base_bdevs_discovered": 1, 00:17:20.853 "num_base_bdevs_operational": 1, 00:17:20.853 "base_bdevs_list": [ 00:17:20.853 { 00:17:20.853 "name": null, 00:17:20.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.853 "is_configured": false, 00:17:20.853 "data_offset": 0, 00:17:20.853 "data_size": 63488 00:17:20.853 }, 00:17:20.853 { 00:17:20.853 "name": "BaseBdev2", 00:17:20.853 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:20.853 "is_configured": true, 00:17:20.853 "data_offset": 2048, 00:17:20.853 "data_size": 63488 00:17:20.853 } 00:17:20.853 ] 00:17:20.853 }' 
00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.853 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.421 "name": "raid_bdev1", 00:17:21.421 "uuid": "da9f5e68-044b-4368-ab2a-e25fc5ae1fbe", 00:17:21.421 "strip_size_kb": 0, 00:17:21.421 "state": "online", 00:17:21.421 "raid_level": "raid1", 00:17:21.421 "superblock": true, 00:17:21.421 "num_base_bdevs": 2, 00:17:21.421 "num_base_bdevs_discovered": 1, 00:17:21.421 "num_base_bdevs_operational": 1, 00:17:21.421 "base_bdevs_list": [ 00:17:21.421 { 00:17:21.421 "name": null, 00:17:21.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.421 "is_configured": false, 00:17:21.421 "data_offset": 0, 
00:17:21.421 "data_size": 63488 00:17:21.421 }, 00:17:21.421 { 00:17:21.421 "name": "BaseBdev2", 00:17:21.421 "uuid": "f76d09cb-7e8b-5899-bb94-3ff89c1f2537", 00:17:21.421 "is_configured": true, 00:17:21.421 "data_offset": 2048, 00:17:21.421 "data_size": 63488 00:17:21.421 } 00:17:21.421 ] 00:17:21.421 }' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77083 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77083 ']' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77083 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77083 00:17:21.421 killing process with pid 77083 00:17:21.421 Received shutdown signal, test time was about 18.321979 seconds 00:17:21.421 00:17:21.421 Latency(us) 00:17:21.421 [2024-12-05T19:37:14.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.421 [2024-12-05T19:37:14.862Z] =================================================================================================================== 00:17:21.421 [2024-12-05T19:37:14.862Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77083' 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77083 00:17:21.421 [2024-12-05 19:37:14.844920] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.421 19:37:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77083 00:17:21.421 [2024-12-05 19:37:14.845094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.421 [2024-12-05 19:37:14.845185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.421 [2024-12-05 19:37:14.845216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.681 [2024-12-05 19:37:15.040179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.057 00:17:23.057 real 0m21.638s 00:17:23.057 user 0m29.422s 00:17:23.057 sys 0m2.046s 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.057 ************************************ 00:17:23.057 END TEST raid_rebuild_test_sb_io 00:17:23.057 ************************************ 00:17:23.057 19:37:16 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:23.057 19:37:16 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:17:23.057 19:37:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:17:23.057 19:37:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.057 19:37:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.057 ************************************ 00:17:23.057 START TEST raid_rebuild_test 00:17:23.057 ************************************ 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77785 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77785 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77785 ']' 00:17:23.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.057 19:37:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.058 [2024-12-05 19:37:16.327449] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:17:23.058 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:23.058 Zero copy mechanism will not be used. 00:17:23.058 [2024-12-05 19:37:16.327908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77785 ] 00:17:23.317 [2024-12-05 19:37:16.517743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.317 [2024-12-05 19:37:16.665432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.589 [2024-12-05 19:37:16.885253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.589 [2024-12-05 19:37:16.885331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 BaseBdev1_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 [2024-12-05 19:37:17.427527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:24.180 [2024-12-05 19:37:17.427603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.180 [2024-12-05 19:37:17.427634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:24.180 [2024-12-05 19:37:17.427653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.180 [2024-12-05 19:37:17.430537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.180 [2024-12-05 19:37:17.430601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.180 BaseBdev1 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.180 BaseBdev2_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 [2024-12-05 19:37:17.482412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:24.180 [2024-12-05 19:37:17.482503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.180 [2024-12-05 19:37:17.482561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:24.180 [2024-12-05 19:37:17.482577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.180 [2024-12-05 19:37:17.485440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.180 [2024-12-05 19:37:17.485489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.180 BaseBdev2 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 BaseBdev3_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 [2024-12-05 19:37:17.546896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:24.180 [2024-12-05 19:37:17.546981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.180 [2024-12-05 19:37:17.547015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:24.180 [2024-12-05 19:37:17.547032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.180 [2024-12-05 19:37:17.549907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.180 [2024-12-05 19:37:17.549954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:24.180 BaseBdev3 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 BaseBdev4_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 [2024-12-05 19:37:17.598712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:24.180 [2024-12-05 19:37:17.598822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.180 [2024-12-05 19:37:17.598853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:24.180 [2024-12-05 19:37:17.598870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.180 [2024-12-05 19:37:17.601623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.180 [2024-12-05 19:37:17.601687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:24.180 BaseBdev4 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.180 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.439 spare_malloc 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.439 spare_delay 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.439 [2024-12-05 19:37:17.660300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:24.439 [2024-12-05 19:37:17.660535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.439 [2024-12-05 19:37:17.660572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:24.439 [2024-12-05 19:37:17.660591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.439 [2024-12-05 19:37:17.663522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.439 [2024-12-05 19:37:17.663734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:24.439 spare 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.439 [2024-12-05 19:37:17.672381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.439 [2024-12-05 19:37:17.674809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.439 [2024-12-05 19:37:17.674891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.439 [2024-12-05 19:37:17.674965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:24.439 [2024-12-05 19:37:17.675066] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:24.439 [2024-12-05 19:37:17.675102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:24.439 [2024-12-05 19:37:17.675397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:24.439 [2024-12-05 19:37:17.675588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:24.439 [2024-12-05 19:37:17.675605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:24.439 [2024-12-05 19:37:17.675823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.439 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.440 "name": "raid_bdev1", 00:17:24.440 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:24.440 "strip_size_kb": 0, 00:17:24.440 "state": "online", 00:17:24.440 "raid_level": "raid1", 00:17:24.440 "superblock": false, 00:17:24.440 "num_base_bdevs": 4, 00:17:24.440 "num_base_bdevs_discovered": 4, 00:17:24.440 "num_base_bdevs_operational": 4, 00:17:24.440 "base_bdevs_list": [ 00:17:24.440 { 00:17:24.440 "name": "BaseBdev1", 00:17:24.440 "uuid": "41746872-9a91-5f0f-8bd4-bf1803ff8ff3", 00:17:24.440 "is_configured": true, 00:17:24.440 "data_offset": 0, 00:17:24.440 "data_size": 65536 00:17:24.440 }, 00:17:24.440 { 00:17:24.440 "name": "BaseBdev2", 00:17:24.440 "uuid": "93e6a38d-0fb9-58fa-91bc-fd07e7ab3f15", 00:17:24.440 "is_configured": true, 00:17:24.440 "data_offset": 0, 00:17:24.440 "data_size": 65536 00:17:24.440 }, 00:17:24.440 { 00:17:24.440 "name": "BaseBdev3", 00:17:24.440 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:24.440 "is_configured": true, 00:17:24.440 "data_offset": 0, 00:17:24.440 "data_size": 65536 00:17:24.440 }, 00:17:24.440 { 00:17:24.440 "name": "BaseBdev4", 00:17:24.440 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:24.440 "is_configured": true, 00:17:24.440 "data_offset": 0, 00:17:24.440 "data_size": 65536 00:17:24.440 } 00:17:24.440 ] 00:17:24.440 }' 00:17:24.440 19:37:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.440 19:37:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:25.007 [2024-12-05 19:37:18.201010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.007 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:25.267 [2024-12-05 19:37:18.592836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:25.267 /dev/nbd0 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # (( i <= 20 )) 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.267 1+0 records in 00:17:25.267 1+0 records out 00:17:25.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312863 s, 13.1 MB/s 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:25.267 19:37:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:35.269 65536+0 records in 00:17:35.269 65536+0 records out 00:17:35.269 33554432 bytes (34 MB, 32 MiB) copied, 8.78434 s, 3.8 MB/s 00:17:35.269 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:35.269 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.269 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:35.269 19:37:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.270 [2024-12-05 19:37:27.735600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.270 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.271 [2024-12-05 19:37:27.748206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.271 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.271 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:35.271 19:37:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.271 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.271 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.272 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.272 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.274 "name": "raid_bdev1", 00:17:35.274 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:35.274 "strip_size_kb": 0, 00:17:35.274 "state": "online", 00:17:35.274 "raid_level": "raid1", 00:17:35.274 "superblock": false, 00:17:35.274 "num_base_bdevs": 4, 00:17:35.274 "num_base_bdevs_discovered": 3, 00:17:35.274 "num_base_bdevs_operational": 3, 00:17:35.274 "base_bdevs_list": [ 00:17:35.274 { 00:17:35.274 "name": null, 00:17:35.274 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.274 "is_configured": false, 00:17:35.274 "data_offset": 0, 00:17:35.274 "data_size": 65536 00:17:35.274 }, 00:17:35.274 { 00:17:35.274 "name": "BaseBdev2", 00:17:35.274 "uuid": "93e6a38d-0fb9-58fa-91bc-fd07e7ab3f15", 00:17:35.274 "is_configured": true, 00:17:35.274 "data_offset": 0, 00:17:35.274 "data_size": 65536 00:17:35.274 }, 00:17:35.274 { 00:17:35.274 "name": "BaseBdev3", 00:17:35.274 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:35.274 "is_configured": true, 00:17:35.274 "data_offset": 0, 00:17:35.274 "data_size": 65536 00:17:35.274 }, 00:17:35.274 { 00:17:35.274 "name": "BaseBdev4", 00:17:35.274 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:35.274 "is_configured": true, 00:17:35.274 "data_offset": 0, 00:17:35.274 "data_size": 65536 00:17:35.274 } 00:17:35.274 ] 00:17:35.274 }' 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.274 19:37:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.274 19:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.274 19:37:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.274 19:37:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.274 [2024-12-05 19:37:28.268435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.274 [2024-12-05 19:37:28.283899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:17:35.275 19:37:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.275 19:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:35.275 [2024-12-05 19:37:28.286786] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.210 19:37:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.210 "name": "raid_bdev1", 00:17:36.210 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:36.210 "strip_size_kb": 0, 00:17:36.210 "state": "online", 00:17:36.210 "raid_level": "raid1", 00:17:36.210 "superblock": false, 00:17:36.210 "num_base_bdevs": 4, 00:17:36.210 "num_base_bdevs_discovered": 4, 00:17:36.210 "num_base_bdevs_operational": 4, 00:17:36.210 "process": { 00:17:36.210 "type": "rebuild", 00:17:36.210 "target": "spare", 00:17:36.210 "progress": { 00:17:36.210 "blocks": 18432, 00:17:36.210 "percent": 28 00:17:36.210 } 00:17:36.210 }, 00:17:36.210 "base_bdevs_list": [ 00:17:36.210 { 00:17:36.210 "name": "spare", 00:17:36.210 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 }, 00:17:36.210 { 
00:17:36.210 "name": "BaseBdev2", 00:17:36.210 "uuid": "93e6a38d-0fb9-58fa-91bc-fd07e7ab3f15", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 }, 00:17:36.210 { 00:17:36.210 "name": "BaseBdev3", 00:17:36.210 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 }, 00:17:36.210 { 00:17:36.210 "name": "BaseBdev4", 00:17:36.210 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 } 00:17:36.210 ] 00:17:36.210 }' 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 [2024-12-05 19:37:29.449221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.210 [2024-12-05 19:37:29.499431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.210 [2024-12-05 19:37:29.499572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.210 [2024-12-05 19:37:29.499633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.210 [2024-12-05 19:37:29.499649] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.210 "name": "raid_bdev1", 00:17:36.210 "uuid": 
"d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:36.210 "strip_size_kb": 0, 00:17:36.210 "state": "online", 00:17:36.210 "raid_level": "raid1", 00:17:36.210 "superblock": false, 00:17:36.210 "num_base_bdevs": 4, 00:17:36.210 "num_base_bdevs_discovered": 3, 00:17:36.210 "num_base_bdevs_operational": 3, 00:17:36.210 "base_bdevs_list": [ 00:17:36.210 { 00:17:36.210 "name": null, 00:17:36.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.210 "is_configured": false, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 }, 00:17:36.210 { 00:17:36.210 "name": "BaseBdev2", 00:17:36.210 "uuid": "93e6a38d-0fb9-58fa-91bc-fd07e7ab3f15", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 }, 00:17:36.210 { 00:17:36.210 "name": "BaseBdev3", 00:17:36.210 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.210 "data_size": 65536 00:17:36.210 }, 00:17:36.210 { 00:17:36.210 "name": "BaseBdev4", 00:17:36.210 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:36.210 "is_configured": true, 00:17:36.210 "data_offset": 0, 00:17:36.211 "data_size": 65536 00:17:36.211 } 00:17:36.211 ] 00:17:36.211 }' 00:17:36.211 19:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.211 19:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.801 "name": "raid_bdev1", 00:17:36.801 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:36.801 "strip_size_kb": 0, 00:17:36.801 "state": "online", 00:17:36.801 "raid_level": "raid1", 00:17:36.801 "superblock": false, 00:17:36.801 "num_base_bdevs": 4, 00:17:36.801 "num_base_bdevs_discovered": 3, 00:17:36.801 "num_base_bdevs_operational": 3, 00:17:36.801 "base_bdevs_list": [ 00:17:36.801 { 00:17:36.801 "name": null, 00:17:36.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.801 "is_configured": false, 00:17:36.801 "data_offset": 0, 00:17:36.801 "data_size": 65536 00:17:36.801 }, 00:17:36.801 { 00:17:36.801 "name": "BaseBdev2", 00:17:36.801 "uuid": "93e6a38d-0fb9-58fa-91bc-fd07e7ab3f15", 00:17:36.801 "is_configured": true, 00:17:36.801 "data_offset": 0, 00:17:36.801 "data_size": 65536 00:17:36.801 }, 00:17:36.801 { 00:17:36.801 "name": "BaseBdev3", 00:17:36.801 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:36.801 "is_configured": true, 00:17:36.801 "data_offset": 0, 00:17:36.801 "data_size": 65536 00:17:36.801 }, 00:17:36.801 { 00:17:36.801 "name": "BaseBdev4", 00:17:36.801 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:36.801 "is_configured": true, 00:17:36.801 "data_offset": 0, 00:17:36.801 "data_size": 65536 00:17:36.801 } 00:17:36.801 ] 00:17:36.801 }' 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.801 [2024-12-05 19:37:30.165934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.801 [2024-12-05 19:37:30.180386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.801 19:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:36.801 [2024-12-05 19:37:30.183226] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.177 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.177 "name": "raid_bdev1", 00:17:38.177 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:38.177 "strip_size_kb": 0, 00:17:38.177 "state": "online", 00:17:38.177 "raid_level": "raid1", 00:17:38.177 "superblock": false, 00:17:38.177 "num_base_bdevs": 4, 00:17:38.177 "num_base_bdevs_discovered": 4, 00:17:38.177 "num_base_bdevs_operational": 4, 00:17:38.177 "process": { 00:17:38.177 "type": "rebuild", 00:17:38.177 "target": "spare", 00:17:38.177 "progress": { 00:17:38.177 "blocks": 20480, 00:17:38.177 "percent": 31 00:17:38.177 } 00:17:38.177 }, 00:17:38.177 "base_bdevs_list": [ 00:17:38.177 { 00:17:38.178 "name": "spare", 00:17:38.178 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 }, 00:17:38.178 { 00:17:38.178 "name": "BaseBdev2", 00:17:38.178 "uuid": "93e6a38d-0fb9-58fa-91bc-fd07e7ab3f15", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 }, 00:17:38.178 { 00:17:38.178 "name": "BaseBdev3", 00:17:38.178 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 }, 00:17:38.178 { 00:17:38.178 "name": "BaseBdev4", 00:17:38.178 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 } 00:17:38.178 ] 00:17:38.178 }' 
00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.178 [2024-12-05 19:37:31.345632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.178 [2024-12-05 19:37:31.395492] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.178 19:37:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.178 "name": "raid_bdev1", 00:17:38.178 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:38.178 "strip_size_kb": 0, 00:17:38.178 "state": "online", 00:17:38.178 "raid_level": "raid1", 00:17:38.178 "superblock": false, 00:17:38.178 "num_base_bdevs": 4, 00:17:38.178 "num_base_bdevs_discovered": 3, 00:17:38.178 "num_base_bdevs_operational": 3, 00:17:38.178 "process": { 00:17:38.178 "type": "rebuild", 00:17:38.178 "target": "spare", 00:17:38.178 "progress": { 00:17:38.178 "blocks": 24576, 00:17:38.178 "percent": 37 00:17:38.178 } 00:17:38.178 }, 00:17:38.178 "base_bdevs_list": [ 00:17:38.178 { 00:17:38.178 "name": "spare", 00:17:38.178 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 }, 00:17:38.178 { 00:17:38.178 "name": null, 00:17:38.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.178 "is_configured": false, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 }, 00:17:38.178 { 00:17:38.178 "name": 
"BaseBdev3", 00:17:38.178 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 }, 00:17:38.178 { 00:17:38.178 "name": "BaseBdev4", 00:17:38.178 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:38.178 "is_configured": true, 00:17:38.178 "data_offset": 0, 00:17:38.178 "data_size": 65536 00:17:38.178 } 00:17:38.178 ] 00:17:38.178 }' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=485 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.178 19:37:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.178 19:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.437 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.437 "name": "raid_bdev1", 00:17:38.437 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:38.437 "strip_size_kb": 0, 00:17:38.437 "state": "online", 00:17:38.437 "raid_level": "raid1", 00:17:38.437 "superblock": false, 00:17:38.437 "num_base_bdevs": 4, 00:17:38.437 "num_base_bdevs_discovered": 3, 00:17:38.437 "num_base_bdevs_operational": 3, 00:17:38.437 "process": { 00:17:38.437 "type": "rebuild", 00:17:38.437 "target": "spare", 00:17:38.437 "progress": { 00:17:38.437 "blocks": 26624, 00:17:38.437 "percent": 40 00:17:38.437 } 00:17:38.437 }, 00:17:38.437 "base_bdevs_list": [ 00:17:38.437 { 00:17:38.437 "name": "spare", 00:17:38.437 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:38.437 "is_configured": true, 00:17:38.437 "data_offset": 0, 00:17:38.437 "data_size": 65536 00:17:38.437 }, 00:17:38.437 { 00:17:38.437 "name": null, 00:17:38.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.437 "is_configured": false, 00:17:38.437 "data_offset": 0, 00:17:38.437 "data_size": 65536 00:17:38.437 }, 00:17:38.437 { 00:17:38.437 "name": "BaseBdev3", 00:17:38.437 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:38.437 "is_configured": true, 00:17:38.437 "data_offset": 0, 00:17:38.437 "data_size": 65536 00:17:38.437 }, 00:17:38.437 { 00:17:38.437 "name": "BaseBdev4", 00:17:38.437 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:38.437 "is_configured": true, 00:17:38.437 "data_offset": 0, 00:17:38.437 "data_size": 65536 00:17:38.437 } 00:17:38.437 ] 00:17:38.437 }' 00:17:38.437 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.437 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:17:38.437 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.437 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.437 19:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.374 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.374 "name": "raid_bdev1", 00:17:39.374 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:39.374 "strip_size_kb": 0, 00:17:39.374 "state": "online", 00:17:39.374 "raid_level": "raid1", 00:17:39.374 "superblock": false, 00:17:39.374 "num_base_bdevs": 4, 00:17:39.374 "num_base_bdevs_discovered": 3, 00:17:39.374 "num_base_bdevs_operational": 3, 00:17:39.374 "process": { 
00:17:39.374 "type": "rebuild", 00:17:39.374 "target": "spare", 00:17:39.374 "progress": { 00:17:39.374 "blocks": 51200, 00:17:39.374 "percent": 78 00:17:39.374 } 00:17:39.374 }, 00:17:39.374 "base_bdevs_list": [ 00:17:39.374 { 00:17:39.374 "name": "spare", 00:17:39.374 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:39.374 "is_configured": true, 00:17:39.374 "data_offset": 0, 00:17:39.374 "data_size": 65536 00:17:39.374 }, 00:17:39.374 { 00:17:39.374 "name": null, 00:17:39.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.374 "is_configured": false, 00:17:39.374 "data_offset": 0, 00:17:39.374 "data_size": 65536 00:17:39.374 }, 00:17:39.374 { 00:17:39.374 "name": "BaseBdev3", 00:17:39.374 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:39.374 "is_configured": true, 00:17:39.374 "data_offset": 0, 00:17:39.374 "data_size": 65536 00:17:39.374 }, 00:17:39.374 { 00:17:39.375 "name": "BaseBdev4", 00:17:39.375 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:39.375 "is_configured": true, 00:17:39.375 "data_offset": 0, 00:17:39.375 "data_size": 65536 00:17:39.375 } 00:17:39.375 ] 00:17:39.375 }' 00:17:39.375 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.634 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.634 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.634 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.634 19:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.201 [2024-12-05 19:37:33.415340] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:40.201 [2024-12-05 19:37:33.415469] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:40.201 [2024-12-05 19:37:33.415541] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.770 "name": "raid_bdev1", 00:17:40.770 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:40.770 "strip_size_kb": 0, 00:17:40.770 "state": "online", 00:17:40.770 "raid_level": "raid1", 00:17:40.770 "superblock": false, 00:17:40.770 "num_base_bdevs": 4, 00:17:40.770 "num_base_bdevs_discovered": 3, 00:17:40.770 "num_base_bdevs_operational": 3, 00:17:40.770 "base_bdevs_list": [ 00:17:40.770 { 00:17:40.770 "name": "spare", 00:17:40.770 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:40.770 "is_configured": true, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 }, 00:17:40.770 { 00:17:40.770 "name": null, 
00:17:40.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.770 "is_configured": false, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 }, 00:17:40.770 { 00:17:40.770 "name": "BaseBdev3", 00:17:40.770 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:40.770 "is_configured": true, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 }, 00:17:40.770 { 00:17:40.770 "name": "BaseBdev4", 00:17:40.770 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:40.770 "is_configured": true, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 } 00:17:40.770 ] 00:17:40.770 }' 00:17:40.770 19:37:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.770 "name": "raid_bdev1", 00:17:40.770 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:40.770 "strip_size_kb": 0, 00:17:40.770 "state": "online", 00:17:40.770 "raid_level": "raid1", 00:17:40.770 "superblock": false, 00:17:40.770 "num_base_bdevs": 4, 00:17:40.770 "num_base_bdevs_discovered": 3, 00:17:40.770 "num_base_bdevs_operational": 3, 00:17:40.770 "base_bdevs_list": [ 00:17:40.770 { 00:17:40.770 "name": "spare", 00:17:40.770 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:40.770 "is_configured": true, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 }, 00:17:40.770 { 00:17:40.770 "name": null, 00:17:40.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.770 "is_configured": false, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 }, 00:17:40.770 { 00:17:40.770 "name": "BaseBdev3", 00:17:40.770 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:40.770 "is_configured": true, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 }, 00:17:40.770 { 00:17:40.770 "name": "BaseBdev4", 00:17:40.770 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:40.770 "is_configured": true, 00:17:40.770 "data_offset": 0, 00:17:40.770 "data_size": 65536 00:17:40.770 } 00:17:40.770 ] 00:17:40.770 }' 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.770 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.030 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.030 "name": "raid_bdev1", 00:17:41.030 "uuid": "d559bb54-b95d-42f1-ae63-083d0eb26743", 00:17:41.030 "strip_size_kb": 0, 00:17:41.030 "state": "online", 00:17:41.030 
"raid_level": "raid1", 00:17:41.030 "superblock": false, 00:17:41.030 "num_base_bdevs": 4, 00:17:41.030 "num_base_bdevs_discovered": 3, 00:17:41.030 "num_base_bdevs_operational": 3, 00:17:41.030 "base_bdevs_list": [ 00:17:41.030 { 00:17:41.030 "name": "spare", 00:17:41.030 "uuid": "1be460cd-50ff-5a5d-af8b-988ec1900ba1", 00:17:41.030 "is_configured": true, 00:17:41.030 "data_offset": 0, 00:17:41.030 "data_size": 65536 00:17:41.030 }, 00:17:41.030 { 00:17:41.030 "name": null, 00:17:41.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.030 "is_configured": false, 00:17:41.030 "data_offset": 0, 00:17:41.030 "data_size": 65536 00:17:41.030 }, 00:17:41.030 { 00:17:41.030 "name": "BaseBdev3", 00:17:41.030 "uuid": "001a1ca1-7dd3-50be-a469-83178811584d", 00:17:41.030 "is_configured": true, 00:17:41.030 "data_offset": 0, 00:17:41.030 "data_size": 65536 00:17:41.030 }, 00:17:41.030 { 00:17:41.030 "name": "BaseBdev4", 00:17:41.030 "uuid": "a6f94c6b-ee84-5b6d-9089-ef33330cc0d8", 00:17:41.030 "is_configured": true, 00:17:41.030 "data_offset": 0, 00:17:41.031 "data_size": 65536 00:17:41.031 } 00:17:41.031 ] 00:17:41.031 }' 00:17:41.031 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.031 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.599 [2024-12-05 19:37:34.773173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.599 [2024-12-05 19:37:34.773447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.599 [2024-12-05 19:37:34.773596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:41.599 [2024-12-05 19:37:34.773747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.599 [2024-12-05 19:37:34.773769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:41.599 
19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.599 19:37:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:41.858 /dev/nbd0 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.858 1+0 records in 00:17:41.858 1+0 records out 00:17:41.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623389 s, 6.6 MB/s 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:41.858 19:37:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.858 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:42.117 /dev/nbd1 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.117 1+0 records in 00:17:42.117 1+0 records out 00:17:42.117 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000456767 s, 9.0 MB/s 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.117 19:37:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.376 19:37:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77785 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77785 ']' 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77785 00:17:42.943 
19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.943 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77785 00:17:43.202 killing process with pid 77785 00:17:43.202 Received shutdown signal, test time was about 60.000000 seconds 00:17:43.202 00:17:43.202 Latency(us) 00:17:43.202 [2024-12-05T19:37:36.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.202 [2024-12-05T19:37:36.643Z] =================================================================================================================== 00:17:43.202 [2024-12-05T19:37:36.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.202 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.202 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.202 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77785' 00:17:43.202 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77785 00:17:43.202 [2024-12-05 19:37:36.405537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.202 19:37:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77785 00:17:43.461 [2024-12-05 19:37:36.885703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:44.844 00:17:44.844 real 0m21.823s 00:17:44.844 user 0m24.578s 00:17:44.844 sys 0m3.916s 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.844 ************************************ 00:17:44.844 END TEST raid_rebuild_test 00:17:44.844 
************************************ 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.844 19:37:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:44.844 19:37:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:44.844 19:37:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.844 19:37:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.844 ************************************ 00:17:44.844 START TEST raid_rebuild_test_sb 00:17:44.844 ************************************ 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 
-- # (( i++ )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:44.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78278 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78278 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78278 ']' 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.844 19:37:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.844 [2024-12-05 19:37:38.208204] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:17:44.845 [2024-12-05 19:37:38.208728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:44.845 Zero copy mechanism will not be used. 
00:17:44.845 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78278 ] 00:17:45.104 [2024-12-05 19:37:38.391457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.362 [2024-12-05 19:37:38.553913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.363 [2024-12-05 19:37:38.788613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.363 [2024-12-05 19:37:38.788943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.930 BaseBdev1_malloc 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.930 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 [2024-12-05 19:37:39.260151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.931 [2024-12-05 19:37:39.260234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:45.931 [2024-12-05 19:37:39.260268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.931 [2024-12-05 19:37:39.260287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.931 [2024-12-05 19:37:39.263126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.931 [2024-12-05 19:37:39.263173] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.931 BaseBdev1 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 BaseBdev2_malloc 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.931 [2024-12-05 19:37:39.316625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:45.931 [2024-12-05 19:37:39.316745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.931 [2024-12-05 19:37:39.316783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:45.931 [2024-12-05 19:37:39.316803] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.931 [2024-12-05 19:37:39.319943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.931 [2024-12-05 19:37:39.320000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.931 BaseBdev2 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.931 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 BaseBdev3_malloc 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 [2024-12-05 19:37:39.391726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:46.190 [2024-12-05 19:37:39.391829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.190 [2024-12-05 19:37:39.391889] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:46.190 [2024-12-05 19:37:39.391922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.190 [2024-12-05 19:37:39.395189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:17:46.190 [2024-12-05 19:37:39.395239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:46.190 BaseBdev3 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 BaseBdev4_malloc 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 [2024-12-05 19:37:39.453791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:46.190 [2024-12-05 19:37:39.453899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.190 [2024-12-05 19:37:39.453935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:46.190 [2024-12-05 19:37:39.453953] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.190 [2024-12-05 19:37:39.457189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.190 [2024-12-05 19:37:39.457240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:46.190 BaseBdev4 00:17:46.190 19:37:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 spare_malloc 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 spare_delay 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.190 [2024-12-05 19:37:39.526818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.190 [2024-12-05 19:37:39.526913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.190 [2024-12-05 19:37:39.526950] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:46.190 [2024-12-05 19:37:39.526985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.190 [2024-12-05 19:37:39.530275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:17:46.190 [2024-12-05 19:37:39.530367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.190 spare 00:17:46.190 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.191 [2024-12-05 19:37:39.538897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.191 [2024-12-05 19:37:39.541529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.191 [2024-12-05 19:37:39.541632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.191 [2024-12-05 19:37:39.541738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.191 [2024-12-05 19:37:39.542019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:46.191 [2024-12-05 19:37:39.542052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:46.191 [2024-12-05 19:37:39.542424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:46.191 [2024-12-05 19:37:39.542680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:46.191 [2024-12-05 19:37:39.542737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:46.191 [2024-12-05 19:37:39.543027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.191 "name": "raid_bdev1", 00:17:46.191 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:17:46.191 "strip_size_kb": 0, 00:17:46.191 "state": "online", 00:17:46.191 "raid_level": "raid1", 
00:17:46.191 "superblock": true, 00:17:46.191 "num_base_bdevs": 4, 00:17:46.191 "num_base_bdevs_discovered": 4, 00:17:46.191 "num_base_bdevs_operational": 4, 00:17:46.191 "base_bdevs_list": [ 00:17:46.191 { 00:17:46.191 "name": "BaseBdev1", 00:17:46.191 "uuid": "84da60ec-e193-5a0f-b552-e81f4ae11edc", 00:17:46.191 "is_configured": true, 00:17:46.191 "data_offset": 2048, 00:17:46.191 "data_size": 63488 00:17:46.191 }, 00:17:46.191 { 00:17:46.191 "name": "BaseBdev2", 00:17:46.191 "uuid": "c3f4d229-4b93-5dd7-a212-c89427ab0bac", 00:17:46.191 "is_configured": true, 00:17:46.191 "data_offset": 2048, 00:17:46.191 "data_size": 63488 00:17:46.191 }, 00:17:46.191 { 00:17:46.191 "name": "BaseBdev3", 00:17:46.191 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:17:46.191 "is_configured": true, 00:17:46.191 "data_offset": 2048, 00:17:46.191 "data_size": 63488 00:17:46.191 }, 00:17:46.191 { 00:17:46.191 "name": "BaseBdev4", 00:17:46.191 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:17:46.191 "is_configured": true, 00:17:46.191 "data_offset": 2048, 00:17:46.191 "data_size": 63488 00:17:46.191 } 00:17:46.191 ] 00:17:46.191 }' 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.191 19:37:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.759 [2024-12-05 19:37:40.039854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.759 
19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:46.759 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:47.017 [2024-12-05 19:37:40.419505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:47.017 /dev/nbd0 00:17:47.017 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.275 1+0 records in 00:17:47.275 1+0 records out 00:17:47.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362323 s, 11.3 MB/s 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:47.275 19:37:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:47.275 19:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:57.264 63488+0 records in 00:17:57.264 63488+0 records out 00:17:57.264 32505856 bytes (33 MB, 31 MiB) copied, 8.79321 s, 3.7 MB/s 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:57.264 [2024-12-05 19:37:49.531560] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 [2024-12-05 19:37:49.539639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.264 
19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.264 "name": "raid_bdev1", 00:17:57.264 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:17:57.264 "strip_size_kb": 0, 00:17:57.264 "state": "online", 00:17:57.264 "raid_level": "raid1", 00:17:57.264 "superblock": true, 00:17:57.264 "num_base_bdevs": 4, 00:17:57.264 "num_base_bdevs_discovered": 3, 00:17:57.264 "num_base_bdevs_operational": 3, 00:17:57.264 "base_bdevs_list": [ 00:17:57.264 { 00:17:57.264 "name": null, 00:17:57.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.264 "is_configured": false, 00:17:57.264 "data_offset": 0, 00:17:57.264 "data_size": 63488 00:17:57.264 }, 00:17:57.264 { 00:17:57.264 "name": "BaseBdev2", 00:17:57.264 "uuid": "c3f4d229-4b93-5dd7-a212-c89427ab0bac", 00:17:57.264 "is_configured": true, 00:17:57.264 "data_offset": 2048, 00:17:57.264 "data_size": 63488 00:17:57.264 }, 00:17:57.264 { 00:17:57.264 "name": "BaseBdev3", 00:17:57.264 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 
00:17:57.264 "is_configured": true, 00:17:57.264 "data_offset": 2048, 00:17:57.264 "data_size": 63488 00:17:57.264 }, 00:17:57.264 { 00:17:57.264 "name": "BaseBdev4", 00:17:57.264 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:17:57.264 "is_configured": true, 00:17:57.264 "data_offset": 2048, 00:17:57.264 "data_size": 63488 00:17:57.264 } 00:17:57.264 ] 00:17:57.264 }' 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.264 19:37:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 19:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.264 19:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.264 19:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 [2024-12-05 19:37:50.079910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.264 [2024-12-05 19:37:50.094664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:57.264 19:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.264 19:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:57.264 [2024-12-05 19:37:50.097316] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.832 "name": "raid_bdev1", 00:17:57.832 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:17:57.832 "strip_size_kb": 0, 00:17:57.832 "state": "online", 00:17:57.832 "raid_level": "raid1", 00:17:57.832 "superblock": true, 00:17:57.832 "num_base_bdevs": 4, 00:17:57.832 "num_base_bdevs_discovered": 4, 00:17:57.832 "num_base_bdevs_operational": 4, 00:17:57.832 "process": { 00:17:57.832 "type": "rebuild", 00:17:57.832 "target": "spare", 00:17:57.832 "progress": { 00:17:57.832 "blocks": 18432, 00:17:57.832 "percent": 29 00:17:57.832 } 00:17:57.832 }, 00:17:57.832 "base_bdevs_list": [ 00:17:57.832 { 00:17:57.832 "name": "spare", 00:17:57.832 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:17:57.832 "is_configured": true, 00:17:57.832 "data_offset": 2048, 00:17:57.832 "data_size": 63488 00:17:57.832 }, 00:17:57.832 { 00:17:57.832 "name": "BaseBdev2", 00:17:57.832 "uuid": "c3f4d229-4b93-5dd7-a212-c89427ab0bac", 00:17:57.832 "is_configured": true, 00:17:57.832 "data_offset": 2048, 00:17:57.832 "data_size": 63488 00:17:57.832 }, 00:17:57.832 { 00:17:57.832 "name": "BaseBdev3", 00:17:57.832 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:17:57.832 "is_configured": true, 00:17:57.832 "data_offset": 2048, 00:17:57.832 "data_size": 63488 00:17:57.832 }, 00:17:57.832 { 
00:17:57.832 "name": "BaseBdev4", 00:17:57.832 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:17:57.832 "is_configured": true, 00:17:57.832 "data_offset": 2048, 00:17:57.832 "data_size": 63488 00:17:57.832 } 00:17:57.832 ] 00:17:57.832 }' 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.832 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.832 [2024-12-05 19:37:51.267495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.092 [2024-12-05 19:37:51.309663] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:58.092 [2024-12-05 19:37:51.309776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.092 [2024-12-05 19:37:51.309804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.092 [2024-12-05 19:37:51.309820] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.092 "name": "raid_bdev1", 00:17:58.092 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:17:58.092 "strip_size_kb": 0, 00:17:58.092 "state": "online", 00:17:58.092 "raid_level": "raid1", 00:17:58.092 "superblock": true, 00:17:58.092 "num_base_bdevs": 4, 00:17:58.092 "num_base_bdevs_discovered": 3, 00:17:58.092 "num_base_bdevs_operational": 3, 00:17:58.092 "base_bdevs_list": [ 00:17:58.092 { 00:17:58.092 "name": null, 00:17:58.092 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:58.092 "is_configured": false, 00:17:58.092 "data_offset": 0, 00:17:58.092 "data_size": 63488 00:17:58.092 }, 00:17:58.092 { 00:17:58.092 "name": "BaseBdev2", 00:17:58.092 "uuid": "c3f4d229-4b93-5dd7-a212-c89427ab0bac", 00:17:58.092 "is_configured": true, 00:17:58.092 "data_offset": 2048, 00:17:58.092 "data_size": 63488 00:17:58.092 }, 00:17:58.092 { 00:17:58.092 "name": "BaseBdev3", 00:17:58.092 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:17:58.092 "is_configured": true, 00:17:58.092 "data_offset": 2048, 00:17:58.092 "data_size": 63488 00:17:58.092 }, 00:17:58.092 { 00:17:58.092 "name": "BaseBdev4", 00:17:58.092 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:17:58.092 "is_configured": true, 00:17:58.092 "data_offset": 2048, 00:17:58.092 "data_size": 63488 00:17:58.092 } 00:17:58.092 ] 00:17:58.092 }' 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.092 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.660 19:37:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.660 "name": "raid_bdev1", 00:17:58.660 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:17:58.660 "strip_size_kb": 0, 00:17:58.660 "state": "online", 00:17:58.660 "raid_level": "raid1", 00:17:58.660 "superblock": true, 00:17:58.660 "num_base_bdevs": 4, 00:17:58.660 "num_base_bdevs_discovered": 3, 00:17:58.660 "num_base_bdevs_operational": 3, 00:17:58.660 "base_bdevs_list": [ 00:17:58.660 { 00:17:58.660 "name": null, 00:17:58.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.660 "is_configured": false, 00:17:58.660 "data_offset": 0, 00:17:58.660 "data_size": 63488 00:17:58.660 }, 00:17:58.660 { 00:17:58.660 "name": "BaseBdev2", 00:17:58.660 "uuid": "c3f4d229-4b93-5dd7-a212-c89427ab0bac", 00:17:58.660 "is_configured": true, 00:17:58.660 "data_offset": 2048, 00:17:58.660 "data_size": 63488 00:17:58.660 }, 00:17:58.660 { 00:17:58.660 "name": "BaseBdev3", 00:17:58.660 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:17:58.660 "is_configured": true, 00:17:58.660 "data_offset": 2048, 00:17:58.660 "data_size": 63488 00:17:58.660 }, 00:17:58.660 { 00:17:58.660 "name": "BaseBdev4", 00:17:58.660 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:17:58.660 "is_configured": true, 00:17:58.660 "data_offset": 2048, 00:17:58.660 "data_size": 63488 00:17:58.660 } 00:17:58.660 ] 00:17:58.660 }' 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.660 19:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.660 19:37:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.660 19:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.660 19:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.660 19:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.660 [2024-12-05 19:37:52.039692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.660 [2024-12-05 19:37:52.054229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:58.660 19:37:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.660 19:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.660 [2024-12-05 19:37:52.057093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.037 19:37:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.037 "name": "raid_bdev1", 00:18:00.037 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:00.037 "strip_size_kb": 0, 00:18:00.037 "state": "online", 00:18:00.037 "raid_level": "raid1", 00:18:00.037 "superblock": true, 00:18:00.037 "num_base_bdevs": 4, 00:18:00.037 "num_base_bdevs_discovered": 4, 00:18:00.037 "num_base_bdevs_operational": 4, 00:18:00.037 "process": { 00:18:00.037 "type": "rebuild", 00:18:00.037 "target": "spare", 00:18:00.037 "progress": { 00:18:00.037 "blocks": 20480, 00:18:00.037 "percent": 32 00:18:00.037 } 00:18:00.037 }, 00:18:00.037 "base_bdevs_list": [ 00:18:00.037 { 00:18:00.037 "name": "spare", 00:18:00.037 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:00.037 "is_configured": true, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 }, 00:18:00.037 { 00:18:00.037 "name": "BaseBdev2", 00:18:00.037 "uuid": "c3f4d229-4b93-5dd7-a212-c89427ab0bac", 00:18:00.037 "is_configured": true, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 }, 00:18:00.037 { 00:18:00.037 "name": "BaseBdev3", 00:18:00.037 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:00.037 "is_configured": true, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 }, 00:18:00.037 { 00:18:00.037 "name": "BaseBdev4", 00:18:00.037 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:00.037 "is_configured": true, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 } 00:18:00.037 ] 00:18:00.037 }' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:00.037 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.037 [2024-12-05 19:37:53.223074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.037 [2024-12-05 19:37:53.369040] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.037 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.037 "name": "raid_bdev1", 00:18:00.037 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:00.037 "strip_size_kb": 0, 00:18:00.037 "state": "online", 00:18:00.037 "raid_level": "raid1", 00:18:00.037 "superblock": true, 00:18:00.037 "num_base_bdevs": 4, 00:18:00.037 "num_base_bdevs_discovered": 3, 00:18:00.037 "num_base_bdevs_operational": 3, 00:18:00.037 "process": { 00:18:00.037 "type": "rebuild", 00:18:00.037 "target": "spare", 00:18:00.037 "progress": { 00:18:00.037 "blocks": 24576, 00:18:00.037 "percent": 38 00:18:00.038 } 00:18:00.038 }, 00:18:00.038 "base_bdevs_list": [ 00:18:00.038 { 00:18:00.038 "name": "spare", 00:18:00.038 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:00.038 "is_configured": true, 00:18:00.038 "data_offset": 2048, 00:18:00.038 "data_size": 63488 00:18:00.038 }, 00:18:00.038 { 00:18:00.038 "name": null, 00:18:00.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.038 "is_configured": false, 00:18:00.038 "data_offset": 0, 00:18:00.038 "data_size": 63488 00:18:00.038 }, 00:18:00.038 { 00:18:00.038 "name": "BaseBdev3", 
00:18:00.038 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:00.038 "is_configured": true, 00:18:00.038 "data_offset": 2048, 00:18:00.038 "data_size": 63488 00:18:00.038 }, 00:18:00.038 { 00:18:00.038 "name": "BaseBdev4", 00:18:00.038 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:00.038 "is_configured": true, 00:18:00.038 "data_offset": 2048, 00:18:00.038 "data_size": 63488 00:18:00.038 } 00:18:00.038 ] 00:18:00.038 }' 00:18:00.038 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.297 "name": "raid_bdev1", 00:18:00.297 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:00.297 "strip_size_kb": 0, 00:18:00.297 "state": "online", 00:18:00.297 "raid_level": "raid1", 00:18:00.297 "superblock": true, 00:18:00.297 "num_base_bdevs": 4, 00:18:00.297 "num_base_bdevs_discovered": 3, 00:18:00.297 "num_base_bdevs_operational": 3, 00:18:00.297 "process": { 00:18:00.297 "type": "rebuild", 00:18:00.297 "target": "spare", 00:18:00.297 "progress": { 00:18:00.297 "blocks": 26624, 00:18:00.297 "percent": 41 00:18:00.297 } 00:18:00.297 }, 00:18:00.297 "base_bdevs_list": [ 00:18:00.297 { 00:18:00.297 "name": "spare", 00:18:00.297 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:00.297 "is_configured": true, 00:18:00.297 "data_offset": 2048, 00:18:00.297 "data_size": 63488 00:18:00.297 }, 00:18:00.297 { 00:18:00.297 "name": null, 00:18:00.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.297 "is_configured": false, 00:18:00.297 "data_offset": 0, 00:18:00.297 "data_size": 63488 00:18:00.297 }, 00:18:00.297 { 00:18:00.297 "name": "BaseBdev3", 00:18:00.297 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:00.297 "is_configured": true, 00:18:00.297 "data_offset": 2048, 00:18:00.297 "data_size": 63488 00:18:00.297 }, 00:18:00.297 { 00:18:00.297 "name": "BaseBdev4", 00:18:00.297 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:00.297 "is_configured": true, 00:18:00.297 "data_offset": 2048, 00:18:00.297 "data_size": 63488 00:18:00.297 } 00:18:00.297 ] 00:18:00.297 }' 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.297 19:37:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.297 19:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.672 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.672 "name": "raid_bdev1", 00:18:01.672 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:01.672 "strip_size_kb": 0, 00:18:01.672 "state": "online", 00:18:01.672 "raid_level": "raid1", 00:18:01.672 "superblock": true, 00:18:01.672 "num_base_bdevs": 4, 
00:18:01.672 "num_base_bdevs_discovered": 3, 00:18:01.672 "num_base_bdevs_operational": 3, 00:18:01.672 "process": { 00:18:01.672 "type": "rebuild", 00:18:01.672 "target": "spare", 00:18:01.672 "progress": { 00:18:01.672 "blocks": 51200, 00:18:01.672 "percent": 80 00:18:01.672 } 00:18:01.672 }, 00:18:01.672 "base_bdevs_list": [ 00:18:01.672 { 00:18:01.672 "name": "spare", 00:18:01.672 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:01.672 "is_configured": true, 00:18:01.672 "data_offset": 2048, 00:18:01.672 "data_size": 63488 00:18:01.672 }, 00:18:01.672 { 00:18:01.672 "name": null, 00:18:01.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.672 "is_configured": false, 00:18:01.672 "data_offset": 0, 00:18:01.672 "data_size": 63488 00:18:01.672 }, 00:18:01.672 { 00:18:01.672 "name": "BaseBdev3", 00:18:01.673 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:01.673 "is_configured": true, 00:18:01.673 "data_offset": 2048, 00:18:01.673 "data_size": 63488 00:18:01.673 }, 00:18:01.673 { 00:18:01.673 "name": "BaseBdev4", 00:18:01.673 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:01.673 "is_configured": true, 00:18:01.673 "data_offset": 2048, 00:18:01.673 "data_size": 63488 00:18:01.673 } 00:18:01.673 ] 00:18:01.673 }' 00:18:01.673 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.673 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.673 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.673 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.673 19:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.931 [2024-12-05 19:37:55.286043] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:01.931 [2024-12-05 19:37:55.286141] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.931 [2024-12-05 19:37:55.286306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.497 "name": "raid_bdev1", 00:18:02.497 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:02.497 "strip_size_kb": 0, 00:18:02.497 "state": "online", 00:18:02.497 "raid_level": "raid1", 00:18:02.497 "superblock": true, 00:18:02.497 "num_base_bdevs": 4, 00:18:02.497 "num_base_bdevs_discovered": 3, 00:18:02.497 "num_base_bdevs_operational": 3, 00:18:02.497 "base_bdevs_list": [ 00:18:02.497 { 00:18:02.497 "name": "spare", 00:18:02.497 "uuid": 
"6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:02.497 "is_configured": true, 00:18:02.497 "data_offset": 2048, 00:18:02.497 "data_size": 63488 00:18:02.497 }, 00:18:02.497 { 00:18:02.497 "name": null, 00:18:02.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.497 "is_configured": false, 00:18:02.497 "data_offset": 0, 00:18:02.497 "data_size": 63488 00:18:02.497 }, 00:18:02.497 { 00:18:02.497 "name": "BaseBdev3", 00:18:02.497 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:02.497 "is_configured": true, 00:18:02.497 "data_offset": 2048, 00:18:02.497 "data_size": 63488 00:18:02.497 }, 00:18:02.497 { 00:18:02.497 "name": "BaseBdev4", 00:18:02.497 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:02.497 "is_configured": true, 00:18:02.497 "data_offset": 2048, 00:18:02.497 "data_size": 63488 00:18:02.497 } 00:18:02.497 ] 00:18:02.497 }' 00:18:02.497 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.756 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:02.756 19:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.756 19:37:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.756 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.757 "name": "raid_bdev1", 00:18:02.757 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:02.757 "strip_size_kb": 0, 00:18:02.757 "state": "online", 00:18:02.757 "raid_level": "raid1", 00:18:02.757 "superblock": true, 00:18:02.757 "num_base_bdevs": 4, 00:18:02.757 "num_base_bdevs_discovered": 3, 00:18:02.757 "num_base_bdevs_operational": 3, 00:18:02.757 "base_bdevs_list": [ 00:18:02.757 { 00:18:02.757 "name": "spare", 00:18:02.757 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:02.757 "is_configured": true, 00:18:02.757 "data_offset": 2048, 00:18:02.757 "data_size": 63488 00:18:02.757 }, 00:18:02.757 { 00:18:02.757 "name": null, 00:18:02.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.757 "is_configured": false, 00:18:02.757 "data_offset": 0, 00:18:02.757 "data_size": 63488 00:18:02.757 }, 00:18:02.757 { 00:18:02.757 "name": "BaseBdev3", 00:18:02.757 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:02.757 "is_configured": true, 00:18:02.757 "data_offset": 2048, 00:18:02.757 "data_size": 63488 00:18:02.757 }, 00:18:02.757 { 00:18:02.757 "name": "BaseBdev4", 00:18:02.757 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:02.757 "is_configured": true, 00:18:02.757 "data_offset": 2048, 00:18:02.757 "data_size": 63488 00:18:02.757 } 00:18:02.757 ] 00:18:02.757 }' 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.757 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.015 19:37:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.016 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.016 "name": "raid_bdev1", 00:18:03.016 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:03.016 "strip_size_kb": 0, 00:18:03.016 "state": "online", 00:18:03.016 "raid_level": "raid1", 00:18:03.016 "superblock": true, 00:18:03.016 "num_base_bdevs": 4, 00:18:03.016 "num_base_bdevs_discovered": 3, 00:18:03.016 "num_base_bdevs_operational": 3, 00:18:03.016 "base_bdevs_list": [ 00:18:03.016 { 00:18:03.016 "name": "spare", 00:18:03.016 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:03.016 "is_configured": true, 00:18:03.016 "data_offset": 2048, 00:18:03.016 "data_size": 63488 00:18:03.016 }, 00:18:03.016 { 00:18:03.016 "name": null, 00:18:03.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.016 "is_configured": false, 00:18:03.016 "data_offset": 0, 00:18:03.016 "data_size": 63488 00:18:03.016 }, 00:18:03.016 { 00:18:03.016 "name": "BaseBdev3", 00:18:03.016 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:03.016 "is_configured": true, 00:18:03.016 "data_offset": 2048, 00:18:03.016 "data_size": 63488 00:18:03.016 }, 00:18:03.016 { 00:18:03.016 "name": "BaseBdev4", 00:18:03.016 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:03.016 "is_configured": true, 00:18:03.016 "data_offset": 2048, 00:18:03.016 "data_size": 63488 00:18:03.016 } 00:18:03.016 ] 00:18:03.016 }' 00:18:03.016 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.016 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 
[2024-12-05 19:37:56.702197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.334 [2024-12-05 19:37:56.702259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.334 [2024-12-05 19:37:56.702366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.334 [2024-12-05 19:37:56.702495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.334 [2024-12-05 19:37:56.702529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.334 19:37:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:03.610 19:37:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.610 19:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:03.869 /dev/nbd0 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:18:03.869 1+0 records in 00:18:03.869 1+0 records out 00:18:03.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325089 s, 12.6 MB/s 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:03.869 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:04.128 /dev/nbd1 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.128 1+0 records in 00:18:04.128 1+0 records out 00:18:04.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041245 s, 9.9 MB/s 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.128 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.386 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.644 19:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:04.902 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:04.902 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:04.903 
19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 [2024-12-05 19:37:58.201276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.903 [2024-12-05 19:37:58.201361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.903 [2024-12-05 19:37:58.201397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:04.903 [2024-12-05 19:37:58.201412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.903 [2024-12-05 19:37:58.204403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.903 [2024-12-05 19:37:58.204465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.903 [2024-12-05 19:37:58.204579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:04.903 [2024-12-05 19:37:58.204653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:18:04.903 [2024-12-05 19:37:58.204846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.903 [2024-12-05 19:37:58.205004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:04.903 spare 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 [2024-12-05 19:37:58.305160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:04.903 [2024-12-05 19:37:58.305195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:04.903 [2024-12-05 19:37:58.305556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:04.903 [2024-12-05 19:37:58.305826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:04.903 [2024-12-05 19:37:58.305849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:04.903 [2024-12-05 19:37:58.306058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.903 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.160 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.160 "name": "raid_bdev1", 00:18:05.160 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:05.160 "strip_size_kb": 0, 00:18:05.160 "state": "online", 00:18:05.160 "raid_level": "raid1", 00:18:05.160 "superblock": true, 00:18:05.160 "num_base_bdevs": 4, 00:18:05.160 "num_base_bdevs_discovered": 3, 00:18:05.160 "num_base_bdevs_operational": 3, 00:18:05.160 "base_bdevs_list": [ 00:18:05.160 { 00:18:05.160 "name": "spare", 00:18:05.160 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:05.160 "is_configured": true, 00:18:05.160 "data_offset": 2048, 00:18:05.160 "data_size": 63488 00:18:05.160 }, 00:18:05.160 { 00:18:05.160 "name": null, 
00:18:05.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.160 "is_configured": false, 00:18:05.160 "data_offset": 2048, 00:18:05.160 "data_size": 63488 00:18:05.160 }, 00:18:05.160 { 00:18:05.160 "name": "BaseBdev3", 00:18:05.160 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:05.160 "is_configured": true, 00:18:05.160 "data_offset": 2048, 00:18:05.160 "data_size": 63488 00:18:05.160 }, 00:18:05.160 { 00:18:05.160 "name": "BaseBdev4", 00:18:05.160 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:05.160 "is_configured": true, 00:18:05.160 "data_offset": 2048, 00:18:05.160 "data_size": 63488 00:18:05.160 } 00:18:05.160 ] 00:18:05.160 }' 00:18:05.160 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.160 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.417 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.675 19:37:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.675 "name": "raid_bdev1", 00:18:05.675 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:05.675 "strip_size_kb": 0, 00:18:05.675 "state": "online", 00:18:05.675 "raid_level": "raid1", 00:18:05.675 "superblock": true, 00:18:05.675 "num_base_bdevs": 4, 00:18:05.675 "num_base_bdevs_discovered": 3, 00:18:05.675 "num_base_bdevs_operational": 3, 00:18:05.675 "base_bdevs_list": [ 00:18:05.675 { 00:18:05.675 "name": "spare", 00:18:05.675 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:05.675 "is_configured": true, 00:18:05.675 "data_offset": 2048, 00:18:05.675 "data_size": 63488 00:18:05.675 }, 00:18:05.675 { 00:18:05.675 "name": null, 00:18:05.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.675 "is_configured": false, 00:18:05.675 "data_offset": 2048, 00:18:05.675 "data_size": 63488 00:18:05.675 }, 00:18:05.675 { 00:18:05.675 "name": "BaseBdev3", 00:18:05.675 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:05.675 "is_configured": true, 00:18:05.675 "data_offset": 2048, 00:18:05.675 "data_size": 63488 00:18:05.675 }, 00:18:05.675 { 00:18:05.675 "name": "BaseBdev4", 00:18:05.675 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:05.675 "is_configured": true, 00:18:05.675 "data_offset": 2048, 00:18:05.675 "data_size": 63488 00:18:05.675 } 00:18:05.675 ] 00:18:05.675 }' 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:05.675 19:37:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.675 19:37:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.675 [2024-12-05 19:37:59.034228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.675 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.676 "name": "raid_bdev1", 00:18:05.676 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:05.676 "strip_size_kb": 0, 00:18:05.676 "state": "online", 00:18:05.676 "raid_level": "raid1", 00:18:05.676 "superblock": true, 00:18:05.676 "num_base_bdevs": 4, 00:18:05.676 "num_base_bdevs_discovered": 2, 00:18:05.676 "num_base_bdevs_operational": 2, 00:18:05.676 "base_bdevs_list": [ 00:18:05.676 { 00:18:05.676 "name": null, 00:18:05.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.676 "is_configured": false, 00:18:05.676 "data_offset": 0, 00:18:05.676 "data_size": 63488 00:18:05.676 }, 00:18:05.676 { 00:18:05.676 "name": null, 00:18:05.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.676 "is_configured": false, 00:18:05.676 "data_offset": 2048, 00:18:05.676 "data_size": 63488 00:18:05.676 }, 00:18:05.676 { 00:18:05.676 "name": "BaseBdev3", 00:18:05.676 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:05.676 "is_configured": true, 00:18:05.676 "data_offset": 2048, 00:18:05.676 "data_size": 63488 00:18:05.676 }, 00:18:05.676 { 00:18:05.676 "name": "BaseBdev4", 00:18:05.676 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:05.676 "is_configured": 
true, 00:18:05.676 "data_offset": 2048, 00:18:05.676 "data_size": 63488 00:18:05.676 } 00:18:05.676 ] 00:18:05.676 }' 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.676 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.240 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.240 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.240 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.240 [2024-12-05 19:37:59.570427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.240 [2024-12-05 19:37:59.570737] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:06.240 [2024-12-05 19:37:59.570781] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:06.240 [2024-12-05 19:37:59.570831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.240 [2024-12-05 19:37:59.584373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:18:06.240 19:37:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.240 19:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:06.240 [2024-12-05 19:37:59.586883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.169 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.426 "name": "raid_bdev1", 00:18:07.426 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:07.426 "strip_size_kb": 0, 00:18:07.426 "state": "online", 00:18:07.426 "raid_level": "raid1", 
00:18:07.426 "superblock": true, 00:18:07.426 "num_base_bdevs": 4, 00:18:07.426 "num_base_bdevs_discovered": 3, 00:18:07.426 "num_base_bdevs_operational": 3, 00:18:07.426 "process": { 00:18:07.426 "type": "rebuild", 00:18:07.426 "target": "spare", 00:18:07.426 "progress": { 00:18:07.426 "blocks": 20480, 00:18:07.426 "percent": 32 00:18:07.426 } 00:18:07.426 }, 00:18:07.426 "base_bdevs_list": [ 00:18:07.426 { 00:18:07.426 "name": "spare", 00:18:07.426 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:07.426 "is_configured": true, 00:18:07.426 "data_offset": 2048, 00:18:07.426 "data_size": 63488 00:18:07.426 }, 00:18:07.426 { 00:18:07.426 "name": null, 00:18:07.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.426 "is_configured": false, 00:18:07.426 "data_offset": 2048, 00:18:07.426 "data_size": 63488 00:18:07.426 }, 00:18:07.426 { 00:18:07.426 "name": "BaseBdev3", 00:18:07.426 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:07.426 "is_configured": true, 00:18:07.426 "data_offset": 2048, 00:18:07.426 "data_size": 63488 00:18:07.426 }, 00:18:07.426 { 00:18:07.426 "name": "BaseBdev4", 00:18:07.426 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:07.426 "is_configured": true, 00:18:07.426 "data_offset": 2048, 00:18:07.426 "data_size": 63488 00:18:07.426 } 00:18:07.426 ] 00:18:07.426 }' 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.426 [2024-12-05 19:38:00.760267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.426 [2024-12-05 19:38:00.795270] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.426 [2024-12-05 19:38:00.795353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.426 [2024-12-05 19:38:00.795384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.426 [2024-12-05 19:38:00.795396] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.426 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.683 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.683 "name": "raid_bdev1", 00:18:07.684 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:07.684 "strip_size_kb": 0, 00:18:07.684 "state": "online", 00:18:07.684 "raid_level": "raid1", 00:18:07.684 "superblock": true, 00:18:07.684 "num_base_bdevs": 4, 00:18:07.684 "num_base_bdevs_discovered": 2, 00:18:07.684 "num_base_bdevs_operational": 2, 00:18:07.684 "base_bdevs_list": [ 00:18:07.684 { 00:18:07.684 "name": null, 00:18:07.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.684 "is_configured": false, 00:18:07.684 "data_offset": 0, 00:18:07.684 "data_size": 63488 00:18:07.684 }, 00:18:07.684 { 00:18:07.684 "name": null, 00:18:07.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.684 "is_configured": false, 00:18:07.684 "data_offset": 2048, 00:18:07.684 "data_size": 63488 00:18:07.684 }, 00:18:07.684 { 00:18:07.684 "name": "BaseBdev3", 00:18:07.684 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:07.684 "is_configured": true, 00:18:07.684 "data_offset": 2048, 00:18:07.684 "data_size": 63488 00:18:07.684 }, 00:18:07.684 { 00:18:07.684 "name": "BaseBdev4", 00:18:07.684 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:07.684 "is_configured": true, 00:18:07.684 "data_offset": 2048, 00:18:07.684 "data_size": 63488 00:18:07.684 } 00:18:07.684 ] 00:18:07.684 }' 00:18:07.684 19:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:07.684 19:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.942 19:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:07.942 19:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.942 19:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.942 [2024-12-05 19:38:01.347222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:07.942 [2024-12-05 19:38:01.347328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.942 [2024-12-05 19:38:01.347372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:07.942 [2024-12-05 19:38:01.347389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.942 [2024-12-05 19:38:01.348034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.942 [2024-12-05 19:38:01.348084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:07.942 [2024-12-05 19:38:01.348213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:07.942 [2024-12-05 19:38:01.348233] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:07.942 [2024-12-05 19:38:01.348249] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:07.942 [2024-12-05 19:38:01.348301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.942 [2024-12-05 19:38:01.361908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:18:07.942 spare 00:18:07.942 19:38:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.942 19:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:07.942 [2024-12-05 19:38:01.364388] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.313 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.314 "name": "raid_bdev1", 00:18:09.314 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:09.314 "strip_size_kb": 0, 00:18:09.314 "state": "online", 00:18:09.314 
"raid_level": "raid1", 00:18:09.314 "superblock": true, 00:18:09.314 "num_base_bdevs": 4, 00:18:09.314 "num_base_bdevs_discovered": 3, 00:18:09.314 "num_base_bdevs_operational": 3, 00:18:09.314 "process": { 00:18:09.314 "type": "rebuild", 00:18:09.314 "target": "spare", 00:18:09.314 "progress": { 00:18:09.314 "blocks": 20480, 00:18:09.314 "percent": 32 00:18:09.314 } 00:18:09.314 }, 00:18:09.314 "base_bdevs_list": [ 00:18:09.314 { 00:18:09.314 "name": "spare", 00:18:09.314 "uuid": "6a309532-7199-59cd-b08c-9e9a2402412d", 00:18:09.314 "is_configured": true, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": null, 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.314 "is_configured": false, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": "BaseBdev3", 00:18:09.314 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:09.314 "is_configured": true, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": "BaseBdev4", 00:18:09.314 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:09.314 "is_configured": true, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 } 00:18:09.314 ] 00:18:09.314 }' 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.314 [2024-12-05 19:38:02.529725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.314 [2024-12-05 19:38:02.572903] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.314 [2024-12-05 19:38:02.572982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.314 [2024-12-05 19:38:02.573017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.314 [2024-12-05 19:38:02.573033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.314 
19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.314 "name": "raid_bdev1", 00:18:09.314 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:09.314 "strip_size_kb": 0, 00:18:09.314 "state": "online", 00:18:09.314 "raid_level": "raid1", 00:18:09.314 "superblock": true, 00:18:09.314 "num_base_bdevs": 4, 00:18:09.314 "num_base_bdevs_discovered": 2, 00:18:09.314 "num_base_bdevs_operational": 2, 00:18:09.314 "base_bdevs_list": [ 00:18:09.314 { 00:18:09.314 "name": null, 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.314 "is_configured": false, 00:18:09.314 "data_offset": 0, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": null, 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.314 "is_configured": false, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": "BaseBdev3", 00:18:09.314 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:09.314 "is_configured": true, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": "BaseBdev4", 00:18:09.314 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:09.314 "is_configured": true, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 } 00:18:09.314 ] 00:18:09.314 }' 00:18:09.314 19:38:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.314 19:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.889 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.889 "name": "raid_bdev1", 00:18:09.889 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:09.889 "strip_size_kb": 0, 00:18:09.889 "state": "online", 00:18:09.889 "raid_level": "raid1", 00:18:09.889 "superblock": true, 00:18:09.889 "num_base_bdevs": 4, 00:18:09.889 "num_base_bdevs_discovered": 2, 00:18:09.889 "num_base_bdevs_operational": 2, 00:18:09.889 "base_bdevs_list": [ 00:18:09.889 { 00:18:09.889 "name": null, 00:18:09.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.889 "is_configured": false, 00:18:09.889 "data_offset": 0, 00:18:09.889 "data_size": 63488 00:18:09.889 }, 00:18:09.889 
{ 00:18:09.889 "name": null, 00:18:09.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.889 "is_configured": false, 00:18:09.889 "data_offset": 2048, 00:18:09.889 "data_size": 63488 00:18:09.889 }, 00:18:09.889 { 00:18:09.889 "name": "BaseBdev3", 00:18:09.889 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:09.889 "is_configured": true, 00:18:09.889 "data_offset": 2048, 00:18:09.889 "data_size": 63488 00:18:09.889 }, 00:18:09.889 { 00:18:09.890 "name": "BaseBdev4", 00:18:09.890 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:09.890 "is_configured": true, 00:18:09.890 "data_offset": 2048, 00:18:09.890 "data_size": 63488 00:18:09.890 } 00:18:09.890 ] 00:18:09.890 }' 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.890 [2024-12-05 19:38:03.281007] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:09.890 [2024-12-05 19:38:03.281079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.890 [2024-12-05 19:38:03.281109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:09.890 [2024-12-05 19:38:03.281126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.890 [2024-12-05 19:38:03.281690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.890 [2024-12-05 19:38:03.281749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:09.890 [2024-12-05 19:38:03.281854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:09.890 [2024-12-05 19:38:03.281881] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:09.890 [2024-12-05 19:38:03.281893] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:09.890 [2024-12-05 19:38:03.281926] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:09.890 BaseBdev1 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.890 19:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.267 19:38:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.267 "name": "raid_bdev1", 00:18:11.267 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:11.267 "strip_size_kb": 0, 00:18:11.267 "state": "online", 00:18:11.267 "raid_level": "raid1", 00:18:11.267 "superblock": true, 00:18:11.267 "num_base_bdevs": 4, 00:18:11.267 "num_base_bdevs_discovered": 2, 00:18:11.267 "num_base_bdevs_operational": 2, 00:18:11.267 "base_bdevs_list": [ 00:18:11.267 { 00:18:11.267 "name": null, 00:18:11.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.267 "is_configured": false, 00:18:11.267 "data_offset": 0, 00:18:11.267 "data_size": 63488 00:18:11.267 }, 00:18:11.267 { 00:18:11.267 "name": null, 00:18:11.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.267 
"is_configured": false, 00:18:11.267 "data_offset": 2048, 00:18:11.267 "data_size": 63488 00:18:11.267 }, 00:18:11.267 { 00:18:11.267 "name": "BaseBdev3", 00:18:11.267 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:11.267 "is_configured": true, 00:18:11.267 "data_offset": 2048, 00:18:11.267 "data_size": 63488 00:18:11.267 }, 00:18:11.267 { 00:18:11.267 "name": "BaseBdev4", 00:18:11.267 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:11.267 "is_configured": true, 00:18:11.267 "data_offset": 2048, 00:18:11.267 "data_size": 63488 00:18:11.267 } 00:18:11.267 ] 00:18:11.267 }' 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.267 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:11.526 "name": "raid_bdev1", 00:18:11.526 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:11.526 "strip_size_kb": 0, 00:18:11.526 "state": "online", 00:18:11.526 "raid_level": "raid1", 00:18:11.526 "superblock": true, 00:18:11.526 "num_base_bdevs": 4, 00:18:11.526 "num_base_bdevs_discovered": 2, 00:18:11.526 "num_base_bdevs_operational": 2, 00:18:11.526 "base_bdevs_list": [ 00:18:11.526 { 00:18:11.526 "name": null, 00:18:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.526 "is_configured": false, 00:18:11.526 "data_offset": 0, 00:18:11.526 "data_size": 63488 00:18:11.526 }, 00:18:11.526 { 00:18:11.526 "name": null, 00:18:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.526 "is_configured": false, 00:18:11.526 "data_offset": 2048, 00:18:11.526 "data_size": 63488 00:18:11.526 }, 00:18:11.526 { 00:18:11.526 "name": "BaseBdev3", 00:18:11.526 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:11.526 "is_configured": true, 00:18:11.526 "data_offset": 2048, 00:18:11.526 "data_size": 63488 00:18:11.526 }, 00:18:11.526 { 00:18:11.526 "name": "BaseBdev4", 00:18:11.526 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:11.526 "is_configured": true, 00:18:11.526 "data_offset": 2048, 00:18:11.526 "data_size": 63488 00:18:11.526 } 00:18:11.526 ] 00:18:11.526 }' 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.526 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.784 [2024-12-05 19:38:04.969585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.784 [2024-12-05 19:38:04.969866] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:11.784 [2024-12-05 19:38:04.969889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:11.784 request: 00:18:11.784 { 00:18:11.784 "base_bdev": "BaseBdev1", 00:18:11.784 "raid_bdev": "raid_bdev1", 00:18:11.784 "method": "bdev_raid_add_base_bdev", 00:18:11.784 "req_id": 1 00:18:11.784 } 00:18:11.784 Got JSON-RPC error response 00:18:11.784 response: 00:18:11.784 { 00:18:11.784 "code": -22, 00:18:11.784 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:11.784 } 00:18:11.784 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:11.784 19:38:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:18:11.784 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.784 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.784 19:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.784 19:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.719 19:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:12.719 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.719 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.720 "name": "raid_bdev1", 00:18:12.720 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:12.720 "strip_size_kb": 0, 00:18:12.720 "state": "online", 00:18:12.720 "raid_level": "raid1", 00:18:12.720 "superblock": true, 00:18:12.720 "num_base_bdevs": 4, 00:18:12.720 "num_base_bdevs_discovered": 2, 00:18:12.720 "num_base_bdevs_operational": 2, 00:18:12.720 "base_bdevs_list": [ 00:18:12.720 { 00:18:12.720 "name": null, 00:18:12.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.720 "is_configured": false, 00:18:12.720 "data_offset": 0, 00:18:12.720 "data_size": 63488 00:18:12.720 }, 00:18:12.720 { 00:18:12.720 "name": null, 00:18:12.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.720 "is_configured": false, 00:18:12.720 "data_offset": 2048, 00:18:12.720 "data_size": 63488 00:18:12.720 }, 00:18:12.720 { 00:18:12.720 "name": "BaseBdev3", 00:18:12.720 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:12.720 "is_configured": true, 00:18:12.720 "data_offset": 2048, 00:18:12.720 "data_size": 63488 00:18:12.720 }, 00:18:12.720 { 00:18:12.720 "name": "BaseBdev4", 00:18:12.720 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:12.720 "is_configured": true, 00:18:12.720 "data_offset": 2048, 00:18:12.720 "data_size": 63488 00:18:12.720 } 00:18:12.720 ] 00:18:12.720 }' 00:18:12.720 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.720 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.288 19:38:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.288 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.288 "name": "raid_bdev1", 00:18:13.288 "uuid": "dda97e53-fca7-402c-b564-cd235ae92ebf", 00:18:13.288 "strip_size_kb": 0, 00:18:13.288 "state": "online", 00:18:13.288 "raid_level": "raid1", 00:18:13.288 "superblock": true, 00:18:13.288 "num_base_bdevs": 4, 00:18:13.288 "num_base_bdevs_discovered": 2, 00:18:13.288 "num_base_bdevs_operational": 2, 00:18:13.288 "base_bdevs_list": [ 00:18:13.288 { 00:18:13.288 "name": null, 00:18:13.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.288 "is_configured": false, 00:18:13.288 "data_offset": 0, 00:18:13.288 "data_size": 63488 00:18:13.288 }, 00:18:13.289 { 00:18:13.289 "name": null, 00:18:13.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.289 "is_configured": false, 00:18:13.289 "data_offset": 2048, 00:18:13.289 "data_size": 63488 00:18:13.289 }, 00:18:13.289 { 00:18:13.289 "name": "BaseBdev3", 00:18:13.289 "uuid": "e0675ec5-48c0-5f90-bb1b-4f6ce91f4c05", 00:18:13.289 "is_configured": true, 00:18:13.289 "data_offset": 2048, 00:18:13.289 "data_size": 63488 00:18:13.289 }, 
00:18:13.289 { 00:18:13.289 "name": "BaseBdev4", 00:18:13.289 "uuid": "63f32e34-c9f5-54af-af33-de9da8bcf27d", 00:18:13.289 "is_configured": true, 00:18:13.289 "data_offset": 2048, 00:18:13.289 "data_size": 63488 00:18:13.289 } 00:18:13.289 ] 00:18:13.289 }' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78278 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78278 ']' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78278 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78278 00:18:13.289 killing process with pid 78278 00:18:13.289 Received shutdown signal, test time was about 60.000000 seconds 00:18:13.289 00:18:13.289 Latency(us) 00:18:13.289 [2024-12-05T19:38:06.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.289 [2024-12-05T19:38:06.730Z] =================================================================================================================== 00:18:13.289 [2024-12-05T19:38:06.730Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78278' 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78278 00:18:13.289 [2024-12-05 19:38:06.687462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.289 19:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78278 00:18:13.289 [2024-12-05 19:38:06.687616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.289 [2024-12-05 19:38:06.687737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.289 [2024-12-05 19:38:06.687756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:13.856 [2024-12-05 19:38:07.121051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.791 19:38:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:14.791 00:18:14.791 real 0m30.092s 00:18:14.791 user 0m36.390s 00:18:14.791 sys 0m4.482s 00:18:14.791 19:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.791 19:38:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.791 ************************************ 00:18:14.791 END TEST raid_rebuild_test_sb 00:18:14.791 ************************************ 00:18:15.050 19:38:08 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:18:15.050 19:38:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:15.050 19:38:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.050 19:38:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:18:15.050 ************************************ 00:18:15.050 START TEST raid_rebuild_test_io 00:18:15.050 ************************************ 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79076 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79076 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79076 ']' 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.050 19:38:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:15.050 Zero copy mechanism will not be used. 00:18:15.050 [2024-12-05 19:38:08.373262] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:18:15.050 [2024-12-05 19:38:08.373481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ] 00:18:15.309 [2024-12-05 19:38:08.562896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.309 [2024-12-05 19:38:08.694631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.567 [2024-12-05 19:38:08.898546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.567 [2024-12-05 19:38:08.898798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 BaseBdev1_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 [2024-12-05 19:38:09.402528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:16.160 [2024-12-05 19:38:09.402633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.160 [2024-12-05 19:38:09.402665] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:16.160 [2024-12-05 19:38:09.402684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.160 [2024-12-05 19:38:09.406020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.160 [2024-12-05 19:38:09.406082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.160 BaseBdev1 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 BaseBdev2_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 [2024-12-05 19:38:09.459652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:16.160 [2024-12-05 19:38:09.459791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.160 [2024-12-05 19:38:09.459827] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:16.160 [2024-12-05 19:38:09.459846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.160 [2024-12-05 19:38:09.462682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.160 [2024-12-05 19:38:09.462761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:16.160 BaseBdev2 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 BaseBdev3_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 [2024-12-05 19:38:09.520750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:16.160 [2024-12-05 19:38:09.520958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.160 [2024-12-05 19:38:09.521012] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:16.160 [2024-12-05 19:38:09.521041] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:16.160 [2024-12-05 19:38:09.524028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.160 [2024-12-05 19:38:09.524226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:16.160 BaseBdev3 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 BaseBdev4_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.160 [2024-12-05 19:38:09.574105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:16.160 [2024-12-05 19:38:09.574307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.160 [2024-12-05 19:38:09.574348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:16.160 [2024-12-05 19:38:09.574367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.160 [2024-12-05 19:38:09.577148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.160 [2024-12-05 19:38:09.577201] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:16.160 BaseBdev4 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.160 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.419 spare_malloc 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.419 spare_delay 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.419 [2024-12-05 19:38:09.634276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.419 [2024-12-05 19:38:09.634366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.419 [2024-12-05 19:38:09.634393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:16.419 [2024-12-05 19:38:09.634411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:16.419 [2024-12-05 19:38:09.637395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.419 [2024-12-05 19:38:09.637443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.419 spare 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.419 [2024-12-05 19:38:09.642421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.419 [2024-12-05 19:38:09.645030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.419 [2024-12-05 19:38:09.645133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.419 [2024-12-05 19:38:09.645212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:16.419 [2024-12-05 19:38:09.645342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:16.419 [2024-12-05 19:38:09.645379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:16.419 [2024-12-05 19:38:09.645693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:16.419 [2024-12-05 19:38:09.645974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:16.419 [2024-12-05 19:38:09.646007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:16.419 [2024-12-05 19:38:09.646192] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.419 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.420 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.420 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.420 "name": "raid_bdev1", 00:18:16.420 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:16.420 
"strip_size_kb": 0, 00:18:16.420 "state": "online", 00:18:16.420 "raid_level": "raid1", 00:18:16.420 "superblock": false, 00:18:16.420 "num_base_bdevs": 4, 00:18:16.420 "num_base_bdevs_discovered": 4, 00:18:16.420 "num_base_bdevs_operational": 4, 00:18:16.420 "base_bdevs_list": [ 00:18:16.420 { 00:18:16.420 "name": "BaseBdev1", 00:18:16.420 "uuid": "b04fb7c7-0414-5bdc-ad57-8a24a7b96987", 00:18:16.420 "is_configured": true, 00:18:16.420 "data_offset": 0, 00:18:16.420 "data_size": 65536 00:18:16.420 }, 00:18:16.420 { 00:18:16.420 "name": "BaseBdev2", 00:18:16.420 "uuid": "6d4541a9-250a-58a0-9965-6f6c8e6e5678", 00:18:16.420 "is_configured": true, 00:18:16.420 "data_offset": 0, 00:18:16.420 "data_size": 65536 00:18:16.420 }, 00:18:16.420 { 00:18:16.420 "name": "BaseBdev3", 00:18:16.420 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:16.420 "is_configured": true, 00:18:16.420 "data_offset": 0, 00:18:16.420 "data_size": 65536 00:18:16.420 }, 00:18:16.420 { 00:18:16.420 "name": "BaseBdev4", 00:18:16.420 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:16.420 "is_configured": true, 00:18:16.420 "data_offset": 0, 00:18:16.420 "data_size": 65536 00:18:16.420 } 00:18:16.420 ] 00:18:16.420 }' 00:18:16.420 19:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.420 19:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.988 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.988 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:16.988 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.988 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.988 [2024-12-05 19:38:10.187044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.988 19:38:10 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.989 [2024-12-05 19:38:10.290572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.989 "name": "raid_bdev1", 00:18:16.989 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:16.989 "strip_size_kb": 0, 00:18:16.989 "state": "online", 00:18:16.989 "raid_level": "raid1", 00:18:16.989 "superblock": false, 00:18:16.989 "num_base_bdevs": 4, 00:18:16.989 "num_base_bdevs_discovered": 3, 00:18:16.989 "num_base_bdevs_operational": 3, 00:18:16.989 "base_bdevs_list": [ 00:18:16.989 { 00:18:16.989 "name": null, 00:18:16.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.989 "is_configured": false, 00:18:16.989 "data_offset": 0, 00:18:16.989 "data_size": 65536 00:18:16.989 
}, 00:18:16.989 { 00:18:16.989 "name": "BaseBdev2", 00:18:16.989 "uuid": "6d4541a9-250a-58a0-9965-6f6c8e6e5678", 00:18:16.989 "is_configured": true, 00:18:16.989 "data_offset": 0, 00:18:16.989 "data_size": 65536 00:18:16.989 }, 00:18:16.989 { 00:18:16.989 "name": "BaseBdev3", 00:18:16.989 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:16.989 "is_configured": true, 00:18:16.989 "data_offset": 0, 00:18:16.989 "data_size": 65536 00:18:16.989 }, 00:18:16.989 { 00:18:16.989 "name": "BaseBdev4", 00:18:16.989 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:16.989 "is_configured": true, 00:18:16.989 "data_offset": 0, 00:18:16.989 "data_size": 65536 00:18:16.989 } 00:18:16.989 ] 00:18:16.989 }' 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.989 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.989 [2024-12-05 19:38:10.414947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:16.989 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:16.989 Zero copy mechanism will not be used. 00:18:16.989 Running I/O for 60 seconds... 
00:18:17.557 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.557 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.557 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.557 [2024-12-05 19:38:10.826434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.557 19:38:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.557 19:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:17.557 [2024-12-05 19:38:10.900909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:17.557 [2024-12-05 19:38:10.903578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.815 [2024-12-05 19:38:11.024004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:17.815 [2024-12-05 19:38:11.025055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:17.815 [2024-12-05 19:38:11.240020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:17.815 [2024-12-05 19:38:11.240563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:18.333 156.00 IOPS, 468.00 MiB/s [2024-12-05T19:38:11.775Z] [2024-12-05 19:38:11.588990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:18.334 [2024-12-05 19:38:11.589583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:18.591 [2024-12-05 19:38:11.794038] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:18.591 [2024-12-05 19:38:11.794838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.591 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.591 "name": "raid_bdev1", 00:18:18.591 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:18.591 "strip_size_kb": 0, 00:18:18.591 "state": "online", 00:18:18.591 "raid_level": "raid1", 00:18:18.591 "superblock": false, 00:18:18.591 "num_base_bdevs": 4, 00:18:18.591 "num_base_bdevs_discovered": 4, 00:18:18.591 "num_base_bdevs_operational": 4, 00:18:18.591 "process": { 00:18:18.591 "type": "rebuild", 00:18:18.591 "target": "spare", 00:18:18.591 "progress": { 00:18:18.591 "blocks": 10240, 
00:18:18.591 "percent": 15 00:18:18.591 } 00:18:18.591 }, 00:18:18.591 "base_bdevs_list": [ 00:18:18.591 { 00:18:18.591 "name": "spare", 00:18:18.591 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:18.591 "is_configured": true, 00:18:18.591 "data_offset": 0, 00:18:18.591 "data_size": 65536 00:18:18.591 }, 00:18:18.591 { 00:18:18.591 "name": "BaseBdev2", 00:18:18.591 "uuid": "6d4541a9-250a-58a0-9965-6f6c8e6e5678", 00:18:18.591 "is_configured": true, 00:18:18.591 "data_offset": 0, 00:18:18.591 "data_size": 65536 00:18:18.591 }, 00:18:18.591 { 00:18:18.591 "name": "BaseBdev3", 00:18:18.591 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:18.591 "is_configured": true, 00:18:18.592 "data_offset": 0, 00:18:18.592 "data_size": 65536 00:18:18.592 }, 00:18:18.592 { 00:18:18.592 "name": "BaseBdev4", 00:18:18.592 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:18.592 "is_configured": true, 00:18:18.592 "data_offset": 0, 00:18:18.592 "data_size": 65536 00:18:18.592 } 00:18:18.592 ] 00:18:18.592 }' 00:18:18.592 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.592 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.592 19:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.850 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.850 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:18.850 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.850 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.850 [2024-12-05 19:38:12.062928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.850 [2024-12-05 19:38:12.173310] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:18.850 [2024-12-05 19:38:12.175050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:19.109 [2024-12-05 19:38:12.293951] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.109 [2024-12-05 19:38:12.319926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.109 [2024-12-05 19:38:12.319990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.109 [2024-12-05 19:38:12.320012] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.109 [2024-12-05 19:38:12.362906] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.109 118.50 IOPS, 355.50 MiB/s [2024-12-05T19:38:12.550Z] 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.109 "name": "raid_bdev1", 00:18:19.109 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:19.109 "strip_size_kb": 0, 00:18:19.109 "state": "online", 00:18:19.109 "raid_level": "raid1", 00:18:19.109 "superblock": false, 00:18:19.109 "num_base_bdevs": 4, 00:18:19.109 "num_base_bdevs_discovered": 3, 00:18:19.109 "num_base_bdevs_operational": 3, 00:18:19.109 "base_bdevs_list": [ 00:18:19.109 { 00:18:19.109 "name": null, 00:18:19.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.109 "is_configured": false, 00:18:19.109 "data_offset": 0, 00:18:19.109 "data_size": 65536 00:18:19.109 }, 00:18:19.109 { 00:18:19.109 "name": "BaseBdev2", 00:18:19.109 "uuid": "6d4541a9-250a-58a0-9965-6f6c8e6e5678", 00:18:19.109 "is_configured": true, 00:18:19.109 "data_offset": 0, 00:18:19.109 "data_size": 65536 00:18:19.109 }, 00:18:19.109 { 00:18:19.109 "name": "BaseBdev3", 00:18:19.109 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:19.109 "is_configured": true, 00:18:19.109 "data_offset": 0, 00:18:19.109 "data_size": 65536 00:18:19.109 }, 00:18:19.109 { 00:18:19.109 "name": "BaseBdev4", 00:18:19.109 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 
00:18:19.109 "is_configured": true, 00:18:19.109 "data_offset": 0, 00:18:19.109 "data_size": 65536 00:18:19.109 } 00:18:19.109 ] 00:18:19.109 }' 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.109 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.676 "name": "raid_bdev1", 00:18:19.676 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:19.676 "strip_size_kb": 0, 00:18:19.676 "state": "online", 00:18:19.676 "raid_level": "raid1", 00:18:19.676 "superblock": false, 00:18:19.676 "num_base_bdevs": 4, 00:18:19.676 "num_base_bdevs_discovered": 3, 00:18:19.676 "num_base_bdevs_operational": 3, 00:18:19.676 "base_bdevs_list": [ 00:18:19.676 { 00:18:19.676 "name": null, 00:18:19.676 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:19.676 "is_configured": false, 00:18:19.676 "data_offset": 0, 00:18:19.676 "data_size": 65536 00:18:19.676 }, 00:18:19.676 { 00:18:19.676 "name": "BaseBdev2", 00:18:19.676 "uuid": "6d4541a9-250a-58a0-9965-6f6c8e6e5678", 00:18:19.676 "is_configured": true, 00:18:19.676 "data_offset": 0, 00:18:19.676 "data_size": 65536 00:18:19.676 }, 00:18:19.676 { 00:18:19.676 "name": "BaseBdev3", 00:18:19.676 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:19.676 "is_configured": true, 00:18:19.676 "data_offset": 0, 00:18:19.676 "data_size": 65536 00:18:19.676 }, 00:18:19.676 { 00:18:19.676 "name": "BaseBdev4", 00:18:19.676 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:19.676 "is_configured": true, 00:18:19.676 "data_offset": 0, 00:18:19.676 "data_size": 65536 00:18:19.676 } 00:18:19.676 ] 00:18:19.676 }' 00:18:19.676 19:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.676 19:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.676 19:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.676 19:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.676 19:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.676 19:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.676 19:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.676 [2024-12-05 19:38:13.079187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.934 19:38:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.934 19:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:19.934 [2024-12-05 19:38:13.155924] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:19.934 [2024-12-05 19:38:13.159046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.934 [2024-12-05 19:38:13.281650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:19.934 [2024-12-05 19:38:13.282693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:20.193 143.00 IOPS, 429.00 MiB/s [2024-12-05T19:38:13.634Z] [2024-12-05 19:38:13.553030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:20.759 [2024-12-05 19:38:14.035643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.759 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.759 "name": "raid_bdev1", 00:18:20.759 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:20.759 "strip_size_kb": 0, 00:18:20.759 "state": "online", 00:18:20.759 "raid_level": "raid1", 00:18:20.759 "superblock": false, 00:18:20.759 "num_base_bdevs": 4, 00:18:20.759 "num_base_bdevs_discovered": 4, 00:18:20.759 "num_base_bdevs_operational": 4, 00:18:20.759 "process": { 00:18:20.759 "type": "rebuild", 00:18:20.759 "target": "spare", 00:18:20.759 "progress": { 00:18:20.759 "blocks": 10240, 00:18:20.759 "percent": 15 00:18:20.759 } 00:18:20.759 }, 00:18:20.759 "base_bdevs_list": [ 00:18:20.759 { 00:18:20.759 "name": "spare", 00:18:20.759 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:20.759 "is_configured": true, 00:18:20.759 "data_offset": 0, 00:18:20.759 "data_size": 65536 00:18:20.759 }, 00:18:20.759 { 00:18:20.759 "name": "BaseBdev2", 00:18:20.759 "uuid": "6d4541a9-250a-58a0-9965-6f6c8e6e5678", 00:18:20.759 "is_configured": true, 00:18:20.759 "data_offset": 0, 00:18:20.759 "data_size": 65536 00:18:20.759 }, 00:18:20.759 { 00:18:20.759 "name": "BaseBdev3", 00:18:20.759 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:20.759 "is_configured": true, 00:18:20.759 "data_offset": 0, 00:18:20.759 "data_size": 65536 00:18:20.759 }, 00:18:20.760 { 00:18:20.760 "name": "BaseBdev4", 00:18:20.760 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:20.760 "is_configured": true, 00:18:20.760 "data_offset": 0, 00:18:20.760 "data_size": 65536 00:18:20.760 } 00:18:20.760 ] 00:18:20.760 }' 00:18:20.760 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.018 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.018 [2024-12-05 19:38:14.309041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.018 [2024-12-05 19:38:14.399433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:21.018 [2024-12-05 19:38:14.401110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:21.284 126.50 IOPS, 379.50 MiB/s [2024-12-05T19:38:14.725Z] [2024-12-05 19:38:14.511814] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:21.285 [2024-12-05 19:38:14.511877] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:21.285 19:38:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.285 "name": "raid_bdev1", 00:18:21.285 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:21.285 "strip_size_kb": 0, 00:18:21.285 "state": "online", 00:18:21.285 "raid_level": "raid1", 00:18:21.285 "superblock": false, 00:18:21.285 "num_base_bdevs": 4, 00:18:21.285 "num_base_bdevs_discovered": 3, 00:18:21.285 "num_base_bdevs_operational": 3, 00:18:21.285 "process": { 00:18:21.285 "type": "rebuild", 00:18:21.285 "target": "spare", 00:18:21.285 "progress": { 00:18:21.285 "blocks": 14336, 00:18:21.285 "percent": 21 00:18:21.285 } 00:18:21.285 }, 00:18:21.285 "base_bdevs_list": [ 00:18:21.285 { 00:18:21.285 "name": "spare", 00:18:21.285 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:21.285 "is_configured": true, 00:18:21.285 "data_offset": 0, 00:18:21.285 "data_size": 65536 
00:18:21.285 }, 00:18:21.285 { 00:18:21.285 "name": null, 00:18:21.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.285 "is_configured": false, 00:18:21.285 "data_offset": 0, 00:18:21.285 "data_size": 65536 00:18:21.285 }, 00:18:21.285 { 00:18:21.285 "name": "BaseBdev3", 00:18:21.285 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:21.285 "is_configured": true, 00:18:21.285 "data_offset": 0, 00:18:21.285 "data_size": 65536 00:18:21.285 }, 00:18:21.285 { 00:18:21.285 "name": "BaseBdev4", 00:18:21.285 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:21.285 "is_configured": true, 00:18:21.285 "data_offset": 0, 00:18:21.285 "data_size": 65536 00:18:21.285 } 00:18:21.285 ] 00:18:21.285 }' 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=528 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.285 19:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.581 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.581 "name": "raid_bdev1", 00:18:21.581 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:21.581 "strip_size_kb": 0, 00:18:21.581 "state": "online", 00:18:21.581 "raid_level": "raid1", 00:18:21.581 "superblock": false, 00:18:21.581 "num_base_bdevs": 4, 00:18:21.581 "num_base_bdevs_discovered": 3, 00:18:21.581 "num_base_bdevs_operational": 3, 00:18:21.581 "process": { 00:18:21.581 "type": "rebuild", 00:18:21.581 "target": "spare", 00:18:21.581 "progress": { 00:18:21.581 "blocks": 16384, 00:18:21.581 "percent": 25 00:18:21.581 } 00:18:21.581 }, 00:18:21.581 "base_bdevs_list": [ 00:18:21.581 { 00:18:21.581 "name": "spare", 00:18:21.581 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:21.581 "is_configured": true, 00:18:21.581 "data_offset": 0, 00:18:21.581 "data_size": 65536 00:18:21.581 }, 00:18:21.581 { 00:18:21.581 "name": null, 00:18:21.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.581 "is_configured": false, 00:18:21.581 "data_offset": 0, 00:18:21.581 "data_size": 65536 00:18:21.581 }, 00:18:21.581 { 00:18:21.581 "name": "BaseBdev3", 00:18:21.581 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:21.581 "is_configured": true, 00:18:21.581 "data_offset": 0, 00:18:21.581 "data_size": 65536 00:18:21.581 }, 00:18:21.581 { 00:18:21.581 "name": "BaseBdev4", 00:18:21.581 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:21.581 "is_configured": true, 
00:18:21.581 "data_offset": 0, 00:18:21.581 "data_size": 65536 00:18:21.581 } 00:18:21.581 ] 00:18:21.581 }' 00:18:21.581 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.581 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.581 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.581 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.581 19:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.581 [2024-12-05 19:38:14.907081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:21.842 [2024-12-05 19:38:15.121883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:21.842 [2024-12-05 19:38:15.122303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:22.099 109.00 IOPS, 327.00 MiB/s [2024-12-05T19:38:15.540Z] [2024-12-05 19:38:15.470163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.666 "name": "raid_bdev1", 00:18:22.666 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:22.666 "strip_size_kb": 0, 00:18:22.666 "state": "online", 00:18:22.666 "raid_level": "raid1", 00:18:22.666 "superblock": false, 00:18:22.666 "num_base_bdevs": 4, 00:18:22.666 "num_base_bdevs_discovered": 3, 00:18:22.666 "num_base_bdevs_operational": 3, 00:18:22.666 "process": { 00:18:22.666 "type": "rebuild", 00:18:22.666 "target": "spare", 00:18:22.666 "progress": { 00:18:22.666 "blocks": 32768, 00:18:22.666 "percent": 50 00:18:22.666 } 00:18:22.666 }, 00:18:22.666 "base_bdevs_list": [ 00:18:22.666 { 00:18:22.666 "name": "spare", 00:18:22.666 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:22.666 "is_configured": true, 00:18:22.666 "data_offset": 0, 00:18:22.666 "data_size": 65536 00:18:22.666 }, 00:18:22.666 { 00:18:22.666 "name": null, 00:18:22.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.666 "is_configured": false, 00:18:22.666 "data_offset": 0, 00:18:22.666 "data_size": 65536 00:18:22.666 }, 00:18:22.666 { 00:18:22.666 "name": "BaseBdev3", 00:18:22.666 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:22.666 "is_configured": true, 00:18:22.666 "data_offset": 0, 00:18:22.666 "data_size": 65536 00:18:22.666 }, 00:18:22.666 { 00:18:22.666 "name": 
"BaseBdev4", 00:18:22.666 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:22.666 "is_configured": true, 00:18:22.666 "data_offset": 0, 00:18:22.666 "data_size": 65536 00:18:22.666 } 00:18:22.666 ] 00:18:22.666 }' 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.666 19:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.666 19:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.666 19:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.925 [2024-12-05 19:38:16.157078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:23.183 100.50 IOPS, 301.50 MiB/s [2024-12-05T19:38:16.624Z] [2024-12-05 19:38:16.609026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.751 "name": "raid_bdev1", 00:18:23.751 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:23.751 "strip_size_kb": 0, 00:18:23.751 "state": "online", 00:18:23.751 "raid_level": "raid1", 00:18:23.751 "superblock": false, 00:18:23.751 "num_base_bdevs": 4, 00:18:23.751 "num_base_bdevs_discovered": 3, 00:18:23.751 "num_base_bdevs_operational": 3, 00:18:23.751 "process": { 00:18:23.751 "type": "rebuild", 00:18:23.751 "target": "spare", 00:18:23.751 "progress": { 00:18:23.751 "blocks": 53248, 00:18:23.751 "percent": 81 00:18:23.751 } 00:18:23.751 }, 00:18:23.751 "base_bdevs_list": [ 00:18:23.751 { 00:18:23.751 "name": "spare", 00:18:23.751 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:23.751 "is_configured": true, 00:18:23.751 "data_offset": 0, 00:18:23.751 "data_size": 65536 00:18:23.751 }, 00:18:23.751 { 00:18:23.751 "name": null, 00:18:23.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.751 "is_configured": false, 00:18:23.751 "data_offset": 0, 00:18:23.751 "data_size": 65536 00:18:23.751 }, 00:18:23.751 { 00:18:23.751 "name": "BaseBdev3", 00:18:23.751 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:23.751 "is_configured": true, 00:18:23.751 "data_offset": 0, 00:18:23.751 "data_size": 65536 00:18:23.751 }, 00:18:23.751 { 00:18:23.751 "name": "BaseBdev4", 00:18:23.751 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:23.751 "is_configured": true, 00:18:23.751 "data_offset": 0, 00:18:23.751 "data_size": 65536 00:18:23.751 } 00:18:23.751 ] 00:18:23.751 }' 
00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.751 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.752 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.752 19:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.011 [2024-12-05 19:38:17.200297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:24.011 [2024-12-05 19:38:17.302923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:24.269 90.86 IOPS, 272.57 MiB/s [2024-12-05T19:38:17.710Z] [2024-12-05 19:38:17.646526] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:24.529 [2024-12-05 19:38:17.753413] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:24.529 [2024-12-05 19:38:17.756622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.787 
19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.787 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.047 "name": "raid_bdev1", 00:18:25.047 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:25.047 "strip_size_kb": 0, 00:18:25.047 "state": "online", 00:18:25.047 "raid_level": "raid1", 00:18:25.047 "superblock": false, 00:18:25.047 "num_base_bdevs": 4, 00:18:25.047 "num_base_bdevs_discovered": 3, 00:18:25.047 "num_base_bdevs_operational": 3, 00:18:25.047 "base_bdevs_list": [ 00:18:25.047 { 00:18:25.047 "name": "spare", 00:18:25.047 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:25.047 "is_configured": true, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 }, 00:18:25.047 { 00:18:25.047 "name": null, 00:18:25.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.047 "is_configured": false, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 }, 00:18:25.047 { 00:18:25.047 "name": "BaseBdev3", 00:18:25.047 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:25.047 "is_configured": true, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 }, 00:18:25.047 { 00:18:25.047 "name": "BaseBdev4", 00:18:25.047 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:25.047 "is_configured": true, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 } 00:18:25.047 ] 00:18:25.047 }' 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.047 "name": "raid_bdev1", 00:18:25.047 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:25.047 "strip_size_kb": 0, 00:18:25.047 "state": "online", 00:18:25.047 "raid_level": "raid1", 00:18:25.047 "superblock": false, 00:18:25.047 "num_base_bdevs": 4, 00:18:25.047 
"num_base_bdevs_discovered": 3, 00:18:25.047 "num_base_bdevs_operational": 3, 00:18:25.047 "base_bdevs_list": [ 00:18:25.047 { 00:18:25.047 "name": "spare", 00:18:25.047 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:25.047 "is_configured": true, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 }, 00:18:25.047 { 00:18:25.047 "name": null, 00:18:25.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.047 "is_configured": false, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 }, 00:18:25.047 { 00:18:25.047 "name": "BaseBdev3", 00:18:25.047 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:25.047 "is_configured": true, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 }, 00:18:25.047 { 00:18:25.047 "name": "BaseBdev4", 00:18:25.047 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:25.047 "is_configured": true, 00:18:25.047 "data_offset": 0, 00:18:25.047 "data_size": 65536 00:18:25.047 } 00:18:25.047 ] 00:18:25.047 }' 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.047 83.12 IOPS, 249.38 MiB/s [2024-12-05T19:38:18.488Z] 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.047 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.307 "name": "raid_bdev1", 00:18:25.307 "uuid": "4402a3c7-cac4-4f32-8b1b-a5ba2dd0a571", 00:18:25.307 "strip_size_kb": 0, 00:18:25.307 "state": "online", 00:18:25.307 "raid_level": "raid1", 00:18:25.307 "superblock": false, 00:18:25.307 "num_base_bdevs": 4, 00:18:25.307 "num_base_bdevs_discovered": 3, 00:18:25.307 "num_base_bdevs_operational": 3, 00:18:25.307 "base_bdevs_list": [ 00:18:25.307 { 00:18:25.307 "name": "spare", 00:18:25.307 "uuid": "6b2d724b-c907-5f90-9095-4dd8ba799f92", 00:18:25.307 "is_configured": true, 00:18:25.307 "data_offset": 0, 00:18:25.307 "data_size": 65536 00:18:25.307 }, 00:18:25.307 { 00:18:25.307 "name": null, 00:18:25.307 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:25.307 "is_configured": false, 00:18:25.307 "data_offset": 0, 00:18:25.307 "data_size": 65536 00:18:25.307 }, 00:18:25.307 { 00:18:25.307 "name": "BaseBdev3", 00:18:25.307 "uuid": "72b6ce70-8d3b-5d27-a0db-347d734838b6", 00:18:25.307 "is_configured": true, 00:18:25.307 "data_offset": 0, 00:18:25.307 "data_size": 65536 00:18:25.307 }, 00:18:25.307 { 00:18:25.307 "name": "BaseBdev4", 00:18:25.307 "uuid": "71e5a5e3-f519-5465-b78b-45c71dbd45da", 00:18:25.307 "is_configured": true, 00:18:25.307 "data_offset": 0, 00:18:25.307 "data_size": 65536 00:18:25.307 } 00:18:25.307 ] 00:18:25.307 }' 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.307 19:38:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.876 [2024-12-05 19:38:19.085340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.876 [2024-12-05 19:38:19.085601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.876 00:18:25.876 Latency(us) 00:18:25.876 [2024-12-05T19:38:19.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.876 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:25.876 raid_bdev1 : 8.77 78.36 235.07 0.00 0.00 16810.40 288.58 123922.62 00:18:25.876 [2024-12-05T19:38:19.317Z] =================================================================================================================== 00:18:25.876 [2024-12-05T19:38:19.317Z] Total : 78.36 235.07 0.00 0.00 16810.40 288.58 123922.62 00:18:25.876 [2024-12-05 19:38:19.205577] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.876 [2024-12-05 19:38:19.205916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.876 [2024-12-05 19:38:19.206098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.876 { 00:18:25.876 "results": [ 00:18:25.876 { 00:18:25.876 "job": "raid_bdev1", 00:18:25.876 "core_mask": "0x1", 00:18:25.876 "workload": "randrw", 00:18:25.876 "percentage": 50, 00:18:25.876 "status": "finished", 00:18:25.876 "queue_depth": 2, 00:18:25.876 "io_size": 3145728, 00:18:25.876 "runtime": 8.767641, 00:18:25.876 "iops": 78.35631043743693, 00:18:25.876 "mibps": 235.06893131231078, 00:18:25.876 "io_failed": 0, 00:18:25.876 "io_timeout": 0, 00:18:25.876 "avg_latency_us": 16810.39701468837, 00:18:25.876 "min_latency_us": 288.58181818181816, 00:18:25.876 "max_latency_us": 123922.61818181818 00:18:25.876 } 00:18:25.876 ], 00:18:25.876 "core_count": 1 00:18:25.876 } 00:18:25.876 [2024-12-05 19:38:19.206315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 
00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:25.876 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:26.446 /dev/nbd0 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.446 1+0 records in 00:18:26.446 1+0 records out 00:18:26.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366924 s, 11.2 MB/s 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.446 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:26.446 /dev/nbd1 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.746 1+0 records in 00:18:26.746 1+0 records out 00:18:26.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479574 s, 8.5 MB/s 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.746 19:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:18:26.746 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:27.011 19:38:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.011 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:27.579 /dev/nbd1 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.579 1+0 records in 00:18:27.579 1+0 records out 00:18:27.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385261 s, 10.6 MB/s 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.579 19:38:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.580 19:38:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.839 
19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.839 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79076 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79076 ']' 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79076 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79076 00:18:28.098 killing process with pid 79076 00:18:28.098 Received shutdown signal, test time was about 11.096111 seconds 00:18:28.098 00:18:28.098 Latency(us) 00:18:28.098 [2024-12-05T19:38:21.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.098 [2024-12-05T19:38:21.539Z] =================================================================================================================== 00:18:28.098 [2024-12-05T19:38:21.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79076' 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79076 00:18:28.098 [2024-12-05 19:38:21.513775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.098 19:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79076 00:18:28.666 [2024-12-05 19:38:21.906658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.604 19:38:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:18:29.604 00:18:29.604 real 0m14.761s 00:18:29.604 user 0m19.481s 00:18:29.604 sys 0m1.786s 00:18:29.604 19:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.604 ************************************ 00:18:29.604 END TEST raid_rebuild_test_io 00:18:29.604 ************************************ 00:18:29.604 19:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.864 19:38:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:18:29.864 19:38:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:29.864 19:38:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.864 19:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.864 ************************************ 00:18:29.864 START TEST raid_rebuild_test_sb_io 00:18:29.864 ************************************ 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79504 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79504 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79504 ']' 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.864 19:38:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.864 [2024-12-05 19:38:23.191259] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:18:29.864 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.864 Zero copy mechanism will not be used. 
00:18:29.864 [2024-12-05 19:38:23.191737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79504 ] 00:18:30.124 [2024-12-05 19:38:23.383867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.124 [2024-12-05 19:38:23.547029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.383 [2024-12-05 19:38:23.751615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.383 [2024-12-05 19:38:23.751671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.953 BaseBdev1_malloc 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.953 [2024-12-05 19:38:24.267543] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.953 [2024-12-05 19:38:24.267627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.953 [2024-12-05 19:38:24.267657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.953 [2024-12-05 19:38:24.267674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.953 [2024-12-05 19:38:24.270502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.953 [2024-12-05 19:38:24.270548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.953 BaseBdev1 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.953 BaseBdev2_malloc 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:30.953 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.954 [2024-12-05 19:38:24.323134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:30.954 [2024-12-05 19:38:24.323222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:30.954 [2024-12-05 19:38:24.323259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.954 [2024-12-05 19:38:24.323277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.954 [2024-12-05 19:38:24.326153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.954 [2024-12-05 19:38:24.326213] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.954 BaseBdev2 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.954 BaseBdev3_malloc 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.954 [2024-12-05 19:38:24.383697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:30.954 [2024-12-05 19:38:24.383817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.954 [2024-12-05 19:38:24.383850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:30.954 
[2024-12-05 19:38:24.383870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.954 [2024-12-05 19:38:24.386676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.954 [2024-12-05 19:38:24.386765] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:30.954 BaseBdev3 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.954 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.213 BaseBdev4_malloc 00:18:31.213 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.213 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:31.213 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.213 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.213 [2024-12-05 19:38:24.440622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:31.213 [2024-12-05 19:38:24.440762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.213 [2024-12-05 19:38:24.440807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:31.213 [2024-12-05 19:38:24.440825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.213 [2024-12-05 19:38:24.443479] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.213 [2024-12-05 19:38:24.443747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:31.213 BaseBdev4 00:18:31.213 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 spare_malloc 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 spare_delay 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 [2024-12-05 19:38:24.500261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.214 [2024-12-05 19:38:24.500501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.214 [2024-12-05 19:38:24.500538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:18:31.214 [2024-12-05 19:38:24.500557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.214 [2024-12-05 19:38:24.503316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.214 [2024-12-05 19:38:24.503364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.214 spare 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 [2024-12-05 19:38:24.512343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.214 [2024-12-05 19:38:24.514785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.214 [2024-12-05 19:38:24.514867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.214 [2024-12-05 19:38:24.514943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.214 [2024-12-05 19:38:24.515161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.214 [2024-12-05 19:38:24.515184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:31.214 [2024-12-05 19:38:24.515463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:31.214 [2024-12-05 19:38:24.515692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.214 [2024-12-05 19:38:24.515771] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:31.214 [2024-12-05 19:38:24.515957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.214 "name": "raid_bdev1", 00:18:31.214 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:31.214 "strip_size_kb": 0, 00:18:31.214 "state": "online", 00:18:31.214 "raid_level": "raid1", 00:18:31.214 "superblock": true, 00:18:31.214 "num_base_bdevs": 4, 00:18:31.214 "num_base_bdevs_discovered": 4, 00:18:31.214 "num_base_bdevs_operational": 4, 00:18:31.214 "base_bdevs_list": [ 00:18:31.214 { 00:18:31.214 "name": "BaseBdev1", 00:18:31.214 "uuid": "d399e6b2-7a0a-5327-9892-7aa0c7749158", 00:18:31.214 "is_configured": true, 00:18:31.214 "data_offset": 2048, 00:18:31.214 "data_size": 63488 00:18:31.214 }, 00:18:31.214 { 00:18:31.214 "name": "BaseBdev2", 00:18:31.214 "uuid": "ca2916e7-6c54-50f6-84a0-b5b07d5743a3", 00:18:31.214 "is_configured": true, 00:18:31.214 "data_offset": 2048, 00:18:31.214 "data_size": 63488 00:18:31.214 }, 00:18:31.214 { 00:18:31.214 "name": "BaseBdev3", 00:18:31.214 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:31.214 "is_configured": true, 00:18:31.214 "data_offset": 2048, 00:18:31.214 "data_size": 63488 00:18:31.214 }, 00:18:31.214 { 00:18:31.214 "name": "BaseBdev4", 00:18:31.214 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:31.214 "is_configured": true, 00:18:31.214 "data_offset": 2048, 00:18:31.214 "data_size": 63488 00:18:31.214 } 00:18:31.214 ] 00:18:31.214 }' 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.214 19:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 [2024-12-05 19:38:25.040974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 [2024-12-05 19:38:25.144459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 19:38:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.818 "name": "raid_bdev1", 00:18:31.818 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:31.818 "strip_size_kb": 0, 00:18:31.818 "state": "online", 00:18:31.818 "raid_level": "raid1", 00:18:31.818 
"superblock": true, 00:18:31.818 "num_base_bdevs": 4, 00:18:31.818 "num_base_bdevs_discovered": 3, 00:18:31.818 "num_base_bdevs_operational": 3, 00:18:31.818 "base_bdevs_list": [ 00:18:31.818 { 00:18:31.818 "name": null, 00:18:31.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.818 "is_configured": false, 00:18:31.818 "data_offset": 0, 00:18:31.818 "data_size": 63488 00:18:31.818 }, 00:18:31.818 { 00:18:31.818 "name": "BaseBdev2", 00:18:31.818 "uuid": "ca2916e7-6c54-50f6-84a0-b5b07d5743a3", 00:18:31.818 "is_configured": true, 00:18:31.818 "data_offset": 2048, 00:18:31.818 "data_size": 63488 00:18:31.818 }, 00:18:31.818 { 00:18:31.818 "name": "BaseBdev3", 00:18:31.818 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:31.818 "is_configured": true, 00:18:31.818 "data_offset": 2048, 00:18:31.818 "data_size": 63488 00:18:31.818 }, 00:18:31.818 { 00:18:31.818 "name": "BaseBdev4", 00:18:31.818 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:31.818 "is_configured": true, 00:18:31.818 "data_offset": 2048, 00:18:31.818 "data_size": 63488 00:18:31.818 } 00:18:31.818 ] 00:18:31.818 }' 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.818 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.076 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:32.076 Zero copy mechanism will not be used. 00:18:32.076 Running I/O for 60 seconds... 
00:18:32.076 [2024-12-05 19:38:25.308863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:32.335 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.335 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.335 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.335 [2024-12-05 19:38:25.703212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.335 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.335 19:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:32.335 [2024-12-05 19:38:25.772158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:32.335 [2024-12-05 19:38:25.774851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.594 [2024-12-05 19:38:25.877681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.594 [2024-12-05 19:38:25.879314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.853 [2024-12-05 19:38:26.100350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:32.853 [2024-12-05 19:38:26.101005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:33.371 132.00 IOPS, 396.00 MiB/s [2024-12-05T19:38:26.812Z] [2024-12-05 19:38:26.598260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:33.371 [2024-12-05 19:38:26.599220] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.371 "name": "raid_bdev1", 00:18:33.371 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:33.371 "strip_size_kb": 0, 00:18:33.371 "state": "online", 00:18:33.371 "raid_level": "raid1", 00:18:33.371 "superblock": true, 00:18:33.371 "num_base_bdevs": 4, 00:18:33.371 "num_base_bdevs_discovered": 4, 00:18:33.371 "num_base_bdevs_operational": 4, 00:18:33.371 "process": { 00:18:33.371 "type": "rebuild", 00:18:33.371 "target": "spare", 00:18:33.371 "progress": { 00:18:33.371 "blocks": 10240, 00:18:33.371 "percent": 16 00:18:33.371 } 00:18:33.371 }, 00:18:33.371 "base_bdevs_list": [ 00:18:33.371 { 00:18:33.371 "name": "spare", 00:18:33.371 "uuid": 
"f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:33.371 "is_configured": true, 00:18:33.371 "data_offset": 2048, 00:18:33.371 "data_size": 63488 00:18:33.371 }, 00:18:33.371 { 00:18:33.371 "name": "BaseBdev2", 00:18:33.371 "uuid": "ca2916e7-6c54-50f6-84a0-b5b07d5743a3", 00:18:33.371 "is_configured": true, 00:18:33.371 "data_offset": 2048, 00:18:33.371 "data_size": 63488 00:18:33.371 }, 00:18:33.371 { 00:18:33.371 "name": "BaseBdev3", 00:18:33.371 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:33.371 "is_configured": true, 00:18:33.371 "data_offset": 2048, 00:18:33.371 "data_size": 63488 00:18:33.371 }, 00:18:33.371 { 00:18:33.371 "name": "BaseBdev4", 00:18:33.371 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:33.371 "is_configured": true, 00:18:33.371 "data_offset": 2048, 00:18:33.371 "data_size": 63488 00:18:33.371 } 00:18:33.371 ] 00:18:33.371 }' 00:18:33.371 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.629 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.629 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.629 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.629 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.629 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.629 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.629 [2024-12-05 19:38:26.929775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.629 [2024-12-05 19:38:26.930196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:33.630 [2024-12-05 19:38:26.939566] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.630 [2024-12-05 19:38:26.952119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.630 [2024-12-05 19:38:26.952187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.630 [2024-12-05 19:38:26.952209] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.630 [2024-12-05 19:38:26.996198] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:33.630 19:38:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.630 19:38:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.630 "name": "raid_bdev1", 00:18:33.630 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:33.630 "strip_size_kb": 0, 00:18:33.630 "state": "online", 00:18:33.630 "raid_level": "raid1", 00:18:33.630 "superblock": true, 00:18:33.630 "num_base_bdevs": 4, 00:18:33.630 "num_base_bdevs_discovered": 3, 00:18:33.630 "num_base_bdevs_operational": 3, 00:18:33.630 "base_bdevs_list": [ 00:18:33.630 { 00:18:33.630 "name": null, 00:18:33.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.630 "is_configured": false, 00:18:33.630 "data_offset": 0, 00:18:33.630 "data_size": 63488 00:18:33.630 }, 00:18:33.630 { 00:18:33.630 "name": "BaseBdev2", 00:18:33.630 "uuid": "ca2916e7-6c54-50f6-84a0-b5b07d5743a3", 00:18:33.630 "is_configured": true, 00:18:33.630 "data_offset": 2048, 00:18:33.630 "data_size": 63488 00:18:33.630 }, 00:18:33.630 { 00:18:33.630 "name": "BaseBdev3", 00:18:33.630 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:33.630 "is_configured": true, 00:18:33.630 "data_offset": 2048, 00:18:33.630 "data_size": 63488 00:18:33.630 }, 00:18:33.630 { 00:18:33.630 "name": "BaseBdev4", 00:18:33.630 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:33.630 "is_configured": true, 00:18:33.630 "data_offset": 2048, 00:18:33.630 "data_size": 63488 00:18:33.630 } 00:18:33.630 ] 00:18:33.630 }' 00:18:33.630 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.630 19:38:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.196 124.50 IOPS, 373.50 MiB/s [2024-12-05T19:38:27.637Z] 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.196 "name": "raid_bdev1", 00:18:34.196 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:34.196 "strip_size_kb": 0, 00:18:34.196 "state": "online", 00:18:34.196 "raid_level": "raid1", 00:18:34.196 "superblock": true, 00:18:34.196 "num_base_bdevs": 4, 00:18:34.196 "num_base_bdevs_discovered": 3, 00:18:34.196 "num_base_bdevs_operational": 3, 00:18:34.196 "base_bdevs_list": [ 00:18:34.196 { 00:18:34.196 "name": null, 00:18:34.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.196 "is_configured": false, 00:18:34.196 "data_offset": 0, 00:18:34.196 "data_size": 63488 00:18:34.196 }, 00:18:34.196 { 
00:18:34.196 "name": "BaseBdev2", 00:18:34.196 "uuid": "ca2916e7-6c54-50f6-84a0-b5b07d5743a3", 00:18:34.196 "is_configured": true, 00:18:34.196 "data_offset": 2048, 00:18:34.196 "data_size": 63488 00:18:34.196 }, 00:18:34.196 { 00:18:34.196 "name": "BaseBdev3", 00:18:34.196 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:34.196 "is_configured": true, 00:18:34.196 "data_offset": 2048, 00:18:34.196 "data_size": 63488 00:18:34.196 }, 00:18:34.196 { 00:18:34.196 "name": "BaseBdev4", 00:18:34.196 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:34.196 "is_configured": true, 00:18:34.196 "data_offset": 2048, 00:18:34.196 "data_size": 63488 00:18:34.196 } 00:18:34.196 ] 00:18:34.196 }' 00:18:34.196 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.455 [2024-12-05 19:38:27.700497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.455 19:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:34.455 [2024-12-05 19:38:27.759576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:34.455 [2024-12-05 19:38:27.762222] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.455 [2024-12-05 19:38:27.885532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:34.714 [2024-12-05 19:38:28.106176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:34.714 [2024-12-05 19:38:28.106455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:35.232 133.00 IOPS, 399.00 MiB/s [2024-12-05T19:38:28.673Z] [2024-12-05 19:38:28.496938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.491 "name": "raid_bdev1", 00:18:35.491 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:35.491 "strip_size_kb": 0, 00:18:35.491 "state": "online", 00:18:35.491 "raid_level": "raid1", 00:18:35.491 "superblock": true, 00:18:35.491 "num_base_bdevs": 4, 00:18:35.491 "num_base_bdevs_discovered": 4, 00:18:35.491 "num_base_bdevs_operational": 4, 00:18:35.491 "process": { 00:18:35.491 "type": "rebuild", 00:18:35.491 "target": "spare", 00:18:35.491 "progress": { 00:18:35.491 "blocks": 12288, 00:18:35.491 "percent": 19 00:18:35.491 } 00:18:35.491 }, 00:18:35.491 "base_bdevs_list": [ 00:18:35.491 { 00:18:35.491 "name": "spare", 00:18:35.491 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:35.491 "is_configured": true, 00:18:35.491 "data_offset": 2048, 00:18:35.491 "data_size": 63488 00:18:35.491 }, 00:18:35.491 { 00:18:35.491 "name": "BaseBdev2", 00:18:35.491 "uuid": "ca2916e7-6c54-50f6-84a0-b5b07d5743a3", 00:18:35.491 "is_configured": true, 00:18:35.491 "data_offset": 2048, 00:18:35.491 "data_size": 63488 00:18:35.491 }, 00:18:35.491 { 00:18:35.491 "name": "BaseBdev3", 00:18:35.491 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:35.491 "is_configured": true, 00:18:35.491 "data_offset": 2048, 00:18:35.491 "data_size": 63488 00:18:35.491 }, 00:18:35.491 { 00:18:35.491 "name": "BaseBdev4", 00:18:35.491 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:35.491 "is_configured": true, 00:18:35.491 "data_offset": 2048, 00:18:35.491 "data_size": 63488 00:18:35.491 } 00:18:35.491 ] 00:18:35.491 }' 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.491 [2024-12-05 19:38:28.826777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.491 19:38:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:35.491 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.491 19:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.491 [2024-12-05 19:38:28.899055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.751 [2024-12-05 19:38:29.087639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:36.010 [2024-12-05 19:38:29.299193] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:36.010 [2024-12-05 19:38:29.299573] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:36.010 19:38:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.010 122.00 IOPS, 366.00 MiB/s [2024-12-05T19:38:29.451Z] 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.010 "name": "raid_bdev1", 00:18:36.010 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:36.010 "strip_size_kb": 0, 00:18:36.010 "state": "online", 00:18:36.010 "raid_level": "raid1", 00:18:36.010 "superblock": true, 00:18:36.010 "num_base_bdevs": 4, 00:18:36.010 "num_base_bdevs_discovered": 3, 00:18:36.010 "num_base_bdevs_operational": 3, 00:18:36.010 "process": { 00:18:36.010 "type": "rebuild", 00:18:36.010 "target": "spare", 00:18:36.010 "progress": { 00:18:36.010 "blocks": 16384, 00:18:36.010 "percent": 25 00:18:36.010 } 00:18:36.010 }, 00:18:36.010 "base_bdevs_list": [ 
00:18:36.010 { 00:18:36.010 "name": "spare", 00:18:36.010 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:36.010 "is_configured": true, 00:18:36.010 "data_offset": 2048, 00:18:36.010 "data_size": 63488 00:18:36.010 }, 00:18:36.010 { 00:18:36.010 "name": null, 00:18:36.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.010 "is_configured": false, 00:18:36.010 "data_offset": 0, 00:18:36.010 "data_size": 63488 00:18:36.010 }, 00:18:36.010 { 00:18:36.010 "name": "BaseBdev3", 00:18:36.010 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:36.010 "is_configured": true, 00:18:36.010 "data_offset": 2048, 00:18:36.010 "data_size": 63488 00:18:36.010 }, 00:18:36.010 { 00:18:36.010 "name": "BaseBdev4", 00:18:36.010 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:36.010 "is_configured": true, 00:18:36.010 "data_offset": 2048, 00:18:36.010 "data_size": 63488 00:18:36.010 } 00:18:36.010 ] 00:18:36.010 }' 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.010 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=543 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.270 19:38:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.270 "name": "raid_bdev1", 00:18:36.270 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:36.270 "strip_size_kb": 0, 00:18:36.270 "state": "online", 00:18:36.270 "raid_level": "raid1", 00:18:36.270 "superblock": true, 00:18:36.270 "num_base_bdevs": 4, 00:18:36.270 "num_base_bdevs_discovered": 3, 00:18:36.270 "num_base_bdevs_operational": 3, 00:18:36.270 "process": { 00:18:36.270 "type": "rebuild", 00:18:36.270 "target": "spare", 00:18:36.270 "progress": { 00:18:36.270 "blocks": 18432, 00:18:36.270 "percent": 29 00:18:36.270 } 00:18:36.270 }, 00:18:36.270 "base_bdevs_list": [ 00:18:36.270 { 00:18:36.270 "name": "spare", 00:18:36.270 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:36.270 "is_configured": true, 00:18:36.270 "data_offset": 2048, 00:18:36.270 "data_size": 63488 00:18:36.270 }, 00:18:36.270 { 00:18:36.270 "name": null, 00:18:36.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.270 "is_configured": false, 00:18:36.270 "data_offset": 0, 00:18:36.270 "data_size": 63488 00:18:36.270 }, 00:18:36.270 { 00:18:36.270 "name": "BaseBdev3", 00:18:36.270 "uuid": 
"2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:36.270 "is_configured": true, 00:18:36.270 "data_offset": 2048, 00:18:36.270 "data_size": 63488 00:18:36.270 }, 00:18:36.270 { 00:18:36.270 "name": "BaseBdev4", 00:18:36.270 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:36.270 "is_configured": true, 00:18:36.270 "data_offset": 2048, 00:18:36.270 "data_size": 63488 00:18:36.270 } 00:18:36.270 ] 00:18:36.270 }' 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.270 [2024-12-05 19:38:29.546174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.270 19:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.270 [2024-12-05 19:38:29.685432] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:36.857 [2024-12-05 19:38:30.171201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:37.160 108.40 IOPS, 325.20 MiB/s [2024-12-05T19:38:30.601Z] [2024-12-05 19:38:30.403674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.419 "name": "raid_bdev1", 00:18:37.419 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:37.419 "strip_size_kb": 0, 00:18:37.419 "state": "online", 00:18:37.419 "raid_level": "raid1", 00:18:37.419 "superblock": true, 00:18:37.419 "num_base_bdevs": 4, 00:18:37.419 "num_base_bdevs_discovered": 3, 00:18:37.419 "num_base_bdevs_operational": 3, 00:18:37.419 "process": { 00:18:37.419 "type": "rebuild", 00:18:37.419 "target": "spare", 00:18:37.419 "progress": { 00:18:37.419 "blocks": 36864, 00:18:37.419 "percent": 58 00:18:37.419 } 00:18:37.419 }, 00:18:37.419 "base_bdevs_list": [ 00:18:37.419 { 00:18:37.419 "name": "spare", 00:18:37.419 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:37.419 "is_configured": true, 00:18:37.419 "data_offset": 2048, 00:18:37.419 "data_size": 63488 00:18:37.419 }, 00:18:37.419 { 00:18:37.419 "name": null, 00:18:37.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.419 
"is_configured": false, 00:18:37.419 "data_offset": 0, 00:18:37.419 "data_size": 63488 00:18:37.419 }, 00:18:37.419 { 00:18:37.419 "name": "BaseBdev3", 00:18:37.419 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:37.419 "is_configured": true, 00:18:37.419 "data_offset": 2048, 00:18:37.419 "data_size": 63488 00:18:37.419 }, 00:18:37.419 { 00:18:37.419 "name": "BaseBdev4", 00:18:37.419 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:37.419 "is_configured": true, 00:18:37.419 "data_offset": 2048, 00:18:37.419 "data_size": 63488 00:18:37.419 } 00:18:37.419 ] 00:18:37.419 }' 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.419 19:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.419 [2024-12-05 19:38:30.848347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:38.553 97.17 IOPS, 291.50 MiB/s [2024-12-05T19:38:31.994Z] [2024-12-05 19:38:31.694908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.553 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.553 "name": "raid_bdev1", 00:18:38.553 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:38.553 "strip_size_kb": 0, 00:18:38.553 "state": "online", 00:18:38.553 "raid_level": "raid1", 00:18:38.553 "superblock": true, 00:18:38.553 "num_base_bdevs": 4, 00:18:38.553 "num_base_bdevs_discovered": 3, 00:18:38.553 "num_base_bdevs_operational": 3, 00:18:38.553 "process": { 00:18:38.553 "type": "rebuild", 00:18:38.553 "target": "spare", 00:18:38.553 "progress": { 00:18:38.553 "blocks": 53248, 00:18:38.553 "percent": 83 00:18:38.553 } 00:18:38.553 }, 00:18:38.553 "base_bdevs_list": [ 00:18:38.553 { 00:18:38.553 "name": "spare", 00:18:38.553 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:38.553 "is_configured": true, 00:18:38.553 "data_offset": 2048, 00:18:38.553 "data_size": 63488 00:18:38.553 }, 00:18:38.553 { 00:18:38.553 "name": null, 00:18:38.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.553 "is_configured": false, 00:18:38.553 "data_offset": 0, 00:18:38.553 "data_size": 63488 00:18:38.553 }, 00:18:38.553 { 00:18:38.553 "name": "BaseBdev3", 00:18:38.553 
"uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:38.553 "is_configured": true, 00:18:38.553 "data_offset": 2048, 00:18:38.553 "data_size": 63488 00:18:38.553 }, 00:18:38.554 { 00:18:38.554 "name": "BaseBdev4", 00:18:38.554 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:38.554 "is_configured": true, 00:18:38.554 "data_offset": 2048, 00:18:38.554 "data_size": 63488 00:18:38.554 } 00:18:38.554 ] 00:18:38.554 }' 00:18:38.554 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.554 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.554 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.812 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.812 19:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.070 88.43 IOPS, 265.29 MiB/s [2024-12-05T19:38:32.512Z] [2024-12-05 19:38:32.365072] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:39.071 [2024-12-05 19:38:32.472937] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:39.071 [2024-12-05 19:38:32.476331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.638 19:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.638 19:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.638 19:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.638 19:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.638 19:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:39.638 19:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.638 "name": "raid_bdev1", 00:18:39.638 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:39.638 "strip_size_kb": 0, 00:18:39.638 "state": "online", 00:18:39.638 "raid_level": "raid1", 00:18:39.638 "superblock": true, 00:18:39.638 "num_base_bdevs": 4, 00:18:39.638 "num_base_bdevs_discovered": 3, 00:18:39.638 "num_base_bdevs_operational": 3, 00:18:39.638 "base_bdevs_list": [ 00:18:39.638 { 00:18:39.638 "name": "spare", 00:18:39.638 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:39.638 "is_configured": true, 00:18:39.638 "data_offset": 2048, 00:18:39.638 "data_size": 63488 00:18:39.638 }, 00:18:39.638 { 00:18:39.638 "name": null, 00:18:39.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.638 "is_configured": false, 00:18:39.638 "data_offset": 0, 00:18:39.638 "data_size": 63488 00:18:39.638 }, 00:18:39.638 { 00:18:39.638 "name": "BaseBdev3", 00:18:39.638 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:39.638 "is_configured": true, 00:18:39.638 "data_offset": 2048, 00:18:39.638 "data_size": 63488 00:18:39.638 }, 00:18:39.638 { 00:18:39.638 "name": "BaseBdev4", 00:18:39.638 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:39.638 "is_configured": true, 
00:18:39.638 "data_offset": 2048, 00:18:39.638 "data_size": 63488 00:18:39.638 } 00:18:39.638 ] 00:18:39.638 }' 00:18:39.638 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.896 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.897 "name": "raid_bdev1", 00:18:39.897 "uuid": 
"d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:39.897 "strip_size_kb": 0, 00:18:39.897 "state": "online", 00:18:39.897 "raid_level": "raid1", 00:18:39.897 "superblock": true, 00:18:39.897 "num_base_bdevs": 4, 00:18:39.897 "num_base_bdevs_discovered": 3, 00:18:39.897 "num_base_bdevs_operational": 3, 00:18:39.897 "base_bdevs_list": [ 00:18:39.897 { 00:18:39.897 "name": "spare", 00:18:39.897 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": null, 00:18:39.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.897 "is_configured": false, 00:18:39.897 "data_offset": 0, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev3", 00:18:39.897 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev4", 00:18:39.897 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 } 00:18:39.897 ] 00:18:39.897 }' 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.897 82.50 IOPS, 247.50 MiB/s [2024-12-05T19:38:33.338Z] 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.897 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.155 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.155 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.155 "name": "raid_bdev1", 00:18:40.155 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:40.155 "strip_size_kb": 0, 00:18:40.155 "state": "online", 00:18:40.155 "raid_level": "raid1", 00:18:40.155 "superblock": true, 00:18:40.156 "num_base_bdevs": 4, 00:18:40.156 "num_base_bdevs_discovered": 3, 00:18:40.156 "num_base_bdevs_operational": 3, 00:18:40.156 "base_bdevs_list": [ 00:18:40.156 { 00:18:40.156 "name": 
"spare", 00:18:40.156 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:40.156 "is_configured": true, 00:18:40.156 "data_offset": 2048, 00:18:40.156 "data_size": 63488 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "name": null, 00:18:40.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.156 "is_configured": false, 00:18:40.156 "data_offset": 0, 00:18:40.156 "data_size": 63488 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "name": "BaseBdev3", 00:18:40.156 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:40.156 "is_configured": true, 00:18:40.156 "data_offset": 2048, 00:18:40.156 "data_size": 63488 00:18:40.156 }, 00:18:40.156 { 00:18:40.156 "name": "BaseBdev4", 00:18:40.156 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:40.156 "is_configured": true, 00:18:40.156 "data_offset": 2048, 00:18:40.156 "data_size": 63488 00:18:40.156 } 00:18:40.156 ] 00:18:40.156 }' 00:18:40.156 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.156 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.414 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.414 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.414 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.414 [2024-12-05 19:38:33.853628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.414 [2024-12-05 19:38:33.853808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.673 00:18:40.673 Latency(us) 00:18:40.673 [2024-12-05T19:38:34.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.673 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:40.673 raid_bdev1 : 8.62 79.33 237.99 0.00 0.00 17502.79 
262.52 120109.61 00:18:40.673 [2024-12-05T19:38:34.114Z] =================================================================================================================== 00:18:40.673 [2024-12-05T19:38:34.114Z] Total : 79.33 237.99 0.00 0.00 17502.79 262.52 120109.61 00:18:40.673 [2024-12-05 19:38:33.953353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.673 [2024-12-05 19:38:33.953445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.673 [2024-12-05 19:38:33.953575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.673 [2024-12-05 19:38:33.953592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:40.673 { 00:18:40.673 "results": [ 00:18:40.673 { 00:18:40.673 "job": "raid_bdev1", 00:18:40.673 "core_mask": "0x1", 00:18:40.673 "workload": "randrw", 00:18:40.673 "percentage": 50, 00:18:40.673 "status": "finished", 00:18:40.673 "queue_depth": 2, 00:18:40.673 "io_size": 3145728, 00:18:40.673 "runtime": 8.622205, 00:18:40.673 "iops": 79.33005536286832, 00:18:40.673 "mibps": 237.99016608860495, 00:18:40.673 "io_failed": 0, 00:18:40.673 "io_timeout": 0, 00:18:40.673 "avg_latency_us": 17502.786687931948, 00:18:40.673 "min_latency_us": 262.5163636363636, 00:18:40.673 "max_latency_us": 120109.61454545455 00:18:40.673 } 00:18:40.673 ], 00:18:40.673 "core_count": 1 00:18:40.673 } 00:18:40.674 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.674 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.674 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:40.674 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.674 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.674 19:38:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.674 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:40.932 /dev/nbd0 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:40.932 19:38:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.932 1+0 records in 00:18:40.932 1+0 records out 00:18:40.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530069 s, 7.7 MB/s 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:40.932 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 
00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.191 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:41.450 /dev/nbd1 00:18:41.450 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:41.450 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:41.450 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:41.450 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.451 1+0 records in 00:18:41.451 1+0 records out 00:18:41.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399137 s, 10.3 MB/s 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.451 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:41.732 19:38:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:41.732 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.732 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:41.732 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:41.732 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:41.732 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:41.732 19:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock 
BaseBdev4 /dev/nbd1 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.990 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:42.249 /dev/nbd1 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.249 1+0 records in 00:18:42.249 1+0 records out 00:18:42.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027844 s, 14.7 MB/s 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:42.249 19:38:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.249 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.507 19:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.766 [2024-12-05 19:38:36.196393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:42.766 [2024-12-05 19:38:36.196466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.766 [2024-12-05 19:38:36.196498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000bd80 00:18:42.766 [2024-12-05 19:38:36.196512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.766 [2024-12-05 19:38:36.199525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.766 [2024-12-05 19:38:36.199570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:42.766 [2024-12-05 19:38:36.199676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:42.766 [2024-12-05 19:38:36.199813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.766 [2024-12-05 19:38:36.199991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.766 [2024-12-05 19:38:36.200152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:42.766 spare 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.766 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.025 [2024-12-05 19:38:36.300353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:43.025 [2024-12-05 19:38:36.300424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:43.025 [2024-12-05 19:38:36.301172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:43.025 [2024-12-05 19:38:36.301580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:43.025 [2024-12-05 19:38:36.301745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 
00:18:43.025 [2024-12-05 19:38:36.302022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:43.025 "name": "raid_bdev1", 00:18:43.025 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:43.025 "strip_size_kb": 0, 00:18:43.025 "state": "online", 00:18:43.025 "raid_level": "raid1", 00:18:43.025 "superblock": true, 00:18:43.025 "num_base_bdevs": 4, 00:18:43.025 "num_base_bdevs_discovered": 3, 00:18:43.025 "num_base_bdevs_operational": 3, 00:18:43.025 "base_bdevs_list": [ 00:18:43.025 { 00:18:43.025 "name": "spare", 00:18:43.025 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:43.025 "is_configured": true, 00:18:43.025 "data_offset": 2048, 00:18:43.025 "data_size": 63488 00:18:43.025 }, 00:18:43.025 { 00:18:43.025 "name": null, 00:18:43.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.025 "is_configured": false, 00:18:43.025 "data_offset": 2048, 00:18:43.025 "data_size": 63488 00:18:43.025 }, 00:18:43.025 { 00:18:43.025 "name": "BaseBdev3", 00:18:43.025 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:43.025 "is_configured": true, 00:18:43.025 "data_offset": 2048, 00:18:43.025 "data_size": 63488 00:18:43.025 }, 00:18:43.025 { 00:18:43.025 "name": "BaseBdev4", 00:18:43.025 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:43.025 "is_configured": true, 00:18:43.025 "data_offset": 2048, 00:18:43.025 "data_size": 63488 00:18:43.025 } 00:18:43.025 ] 00:18:43.025 }' 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.025 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.594 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.594 "name": "raid_bdev1", 00:18:43.594 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:43.594 "strip_size_kb": 0, 00:18:43.594 "state": "online", 00:18:43.594 "raid_level": "raid1", 00:18:43.594 "superblock": true, 00:18:43.595 "num_base_bdevs": 4, 00:18:43.595 "num_base_bdevs_discovered": 3, 00:18:43.595 "num_base_bdevs_operational": 3, 00:18:43.595 "base_bdevs_list": [ 00:18:43.595 { 00:18:43.595 "name": "spare", 00:18:43.595 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:43.595 "is_configured": true, 00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 }, 00:18:43.595 { 00:18:43.595 "name": null, 00:18:43.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.595 "is_configured": false, 00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 }, 00:18:43.595 { 00:18:43.595 "name": "BaseBdev3", 00:18:43.595 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:43.595 "is_configured": true, 00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 }, 00:18:43.595 { 00:18:43.595 "name": "BaseBdev4", 00:18:43.595 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:43.595 "is_configured": true, 
00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 } 00:18:43.595 ] 00:18:43.595 }' 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.595 [2024-12-05 19:38:36.962247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.595 19:38:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.595 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.595 "name": "raid_bdev1", 00:18:43.595 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:43.595 "strip_size_kb": 0, 00:18:43.595 "state": "online", 00:18:43.595 "raid_level": "raid1", 00:18:43.595 "superblock": true, 00:18:43.595 "num_base_bdevs": 4, 00:18:43.595 "num_base_bdevs_discovered": 2, 00:18:43.595 "num_base_bdevs_operational": 2, 00:18:43.595 "base_bdevs_list": [ 00:18:43.595 { 00:18:43.595 
"name": null, 00:18:43.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.595 "is_configured": false, 00:18:43.595 "data_offset": 0, 00:18:43.595 "data_size": 63488 00:18:43.595 }, 00:18:43.595 { 00:18:43.595 "name": null, 00:18:43.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.595 "is_configured": false, 00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 }, 00:18:43.595 { 00:18:43.595 "name": "BaseBdev3", 00:18:43.595 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:43.595 "is_configured": true, 00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 }, 00:18:43.595 { 00:18:43.595 "name": "BaseBdev4", 00:18:43.595 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:43.595 "is_configured": true, 00:18:43.595 "data_offset": 2048, 00:18:43.595 "data_size": 63488 00:18:43.595 } 00:18:43.595 ] 00:18:43.595 }' 00:18:43.595 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.595 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.161 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:44.161 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.161 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.161 [2024-12-05 19:38:37.478552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.161 [2024-12-05 19:38:37.478846] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:44.161 [2024-12-05 19:38:37.478870] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:44.161 [2024-12-05 19:38:37.478926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.161 [2024-12-05 19:38:37.493596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:44.161 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.161 19:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:44.161 [2024-12-05 19:38:37.496172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.096 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.354 "name": "raid_bdev1", 00:18:45.354 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:45.354 "strip_size_kb": 0, 00:18:45.354 "state": "online", 
00:18:45.354 "raid_level": "raid1", 00:18:45.354 "superblock": true, 00:18:45.354 "num_base_bdevs": 4, 00:18:45.354 "num_base_bdevs_discovered": 3, 00:18:45.354 "num_base_bdevs_operational": 3, 00:18:45.354 "process": { 00:18:45.354 "type": "rebuild", 00:18:45.354 "target": "spare", 00:18:45.354 "progress": { 00:18:45.354 "blocks": 20480, 00:18:45.354 "percent": 32 00:18:45.354 } 00:18:45.354 }, 00:18:45.354 "base_bdevs_list": [ 00:18:45.354 { 00:18:45.354 "name": "spare", 00:18:45.354 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:45.354 "is_configured": true, 00:18:45.354 "data_offset": 2048, 00:18:45.354 "data_size": 63488 00:18:45.354 }, 00:18:45.354 { 00:18:45.354 "name": null, 00:18:45.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.354 "is_configured": false, 00:18:45.354 "data_offset": 2048, 00:18:45.354 "data_size": 63488 00:18:45.354 }, 00:18:45.354 { 00:18:45.354 "name": "BaseBdev3", 00:18:45.354 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:45.354 "is_configured": true, 00:18:45.354 "data_offset": 2048, 00:18:45.354 "data_size": 63488 00:18:45.354 }, 00:18:45.354 { 00:18:45.354 "name": "BaseBdev4", 00:18:45.354 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:45.354 "is_configured": true, 00:18:45.354 "data_offset": 2048, 00:18:45.354 "data_size": 63488 00:18:45.354 } 00:18:45.354 ] 00:18:45.354 }' 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:45.354 19:38:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.354 [2024-12-05 19:38:38.666523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.354 [2024-12-05 19:38:38.705640] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:45.354 [2024-12-05 19:38:38.705990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.354 [2024-12-05 19:38:38.706139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.354 [2024-12-05 19:38:38.706191] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.354 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.355 19:38:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.355 "name": "raid_bdev1", 00:18:45.355 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:45.355 "strip_size_kb": 0, 00:18:45.355 "state": "online", 00:18:45.355 "raid_level": "raid1", 00:18:45.355 "superblock": true, 00:18:45.355 "num_base_bdevs": 4, 00:18:45.355 "num_base_bdevs_discovered": 2, 00:18:45.355 "num_base_bdevs_operational": 2, 00:18:45.355 "base_bdevs_list": [ 00:18:45.355 { 00:18:45.355 "name": null, 00:18:45.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.355 "is_configured": false, 00:18:45.355 "data_offset": 0, 00:18:45.355 "data_size": 63488 00:18:45.355 }, 00:18:45.355 { 00:18:45.355 "name": null, 00:18:45.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.355 "is_configured": false, 00:18:45.355 "data_offset": 2048, 00:18:45.355 "data_size": 63488 00:18:45.355 }, 00:18:45.355 { 00:18:45.355 "name": "BaseBdev3", 00:18:45.355 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:45.355 "is_configured": true, 00:18:45.355 "data_offset": 2048, 00:18:45.355 "data_size": 63488 00:18:45.355 }, 00:18:45.355 { 00:18:45.355 "name": "BaseBdev4", 00:18:45.355 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:45.355 "is_configured": true, 00:18:45.355 "data_offset": 2048, 00:18:45.355 
"data_size": 63488 00:18:45.355 } 00:18:45.355 ] 00:18:45.355 }' 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.355 19:38:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.922 19:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.922 19:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.922 19:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.922 [2024-12-05 19:38:39.260909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.922 [2024-12-05 19:38:39.260988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.922 [2024-12-05 19:38:39.261033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:45.922 [2024-12-05 19:38:39.261049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.922 [2024-12-05 19:38:39.261667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.922 [2024-12-05 19:38:39.261727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.922 [2024-12-05 19:38:39.261854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:45.922 [2024-12-05 19:38:39.261875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:45.922 [2024-12-05 19:38:39.261891] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:45.922 [2024-12-05 19:38:39.261921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.922 [2024-12-05 19:38:39.276164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:45.922 spare 00:18:45.922 19:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.922 19:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:45.922 [2024-12-05 19:38:39.278601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.855 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.113 "name": "raid_bdev1", 00:18:47.113 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:47.113 "strip_size_kb": 0, 00:18:47.113 
"state": "online", 00:18:47.113 "raid_level": "raid1", 00:18:47.113 "superblock": true, 00:18:47.113 "num_base_bdevs": 4, 00:18:47.113 "num_base_bdevs_discovered": 3, 00:18:47.113 "num_base_bdevs_operational": 3, 00:18:47.113 "process": { 00:18:47.113 "type": "rebuild", 00:18:47.113 "target": "spare", 00:18:47.113 "progress": { 00:18:47.113 "blocks": 20480, 00:18:47.113 "percent": 32 00:18:47.113 } 00:18:47.113 }, 00:18:47.113 "base_bdevs_list": [ 00:18:47.113 { 00:18:47.113 "name": "spare", 00:18:47.113 "uuid": "f3e986c3-abdc-5dbb-aba0-c39727e9879c", 00:18:47.113 "is_configured": true, 00:18:47.113 "data_offset": 2048, 00:18:47.113 "data_size": 63488 00:18:47.113 }, 00:18:47.113 { 00:18:47.113 "name": null, 00:18:47.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.113 "is_configured": false, 00:18:47.113 "data_offset": 2048, 00:18:47.113 "data_size": 63488 00:18:47.113 }, 00:18:47.113 { 00:18:47.113 "name": "BaseBdev3", 00:18:47.113 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:47.113 "is_configured": true, 00:18:47.113 "data_offset": 2048, 00:18:47.113 "data_size": 63488 00:18:47.113 }, 00:18:47.113 { 00:18:47.113 "name": "BaseBdev4", 00:18:47.113 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:47.113 "is_configured": true, 00:18:47.113 "data_offset": 2048, 00:18:47.113 "data_size": 63488 00:18:47.113 } 00:18:47.113 ] 00:18:47.113 }' 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.113 19:38:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.113 [2024-12-05 19:38:40.444625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.113 [2024-12-05 19:38:40.487908] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:47.113 [2024-12-05 19:38:40.487998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.113 [2024-12-05 19:38:40.488024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.113 [2024-12-05 19:38:40.488040] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.113 19:38:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.113 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.371 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.371 "name": "raid_bdev1", 00:18:47.371 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:47.371 "strip_size_kb": 0, 00:18:47.371 "state": "online", 00:18:47.371 "raid_level": "raid1", 00:18:47.371 "superblock": true, 00:18:47.371 "num_base_bdevs": 4, 00:18:47.371 "num_base_bdevs_discovered": 2, 00:18:47.371 "num_base_bdevs_operational": 2, 00:18:47.371 "base_bdevs_list": [ 00:18:47.371 { 00:18:47.371 "name": null, 00:18:47.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.371 "is_configured": false, 00:18:47.371 "data_offset": 0, 00:18:47.371 "data_size": 63488 00:18:47.371 }, 00:18:47.371 { 00:18:47.371 "name": null, 00:18:47.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.371 "is_configured": false, 00:18:47.371 "data_offset": 2048, 00:18:47.371 "data_size": 63488 00:18:47.371 }, 00:18:47.371 { 00:18:47.371 "name": "BaseBdev3", 00:18:47.371 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:47.371 "is_configured": true, 00:18:47.371 "data_offset": 2048, 00:18:47.371 "data_size": 63488 00:18:47.371 }, 00:18:47.371 { 00:18:47.371 "name": "BaseBdev4", 00:18:47.371 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:47.371 "is_configured": true, 00:18:47.371 "data_offset": 2048, 00:18:47.371 
"data_size": 63488 00:18:47.371 } 00:18:47.371 ] 00:18:47.371 }' 00:18:47.371 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.371 19:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.630 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.889 "name": "raid_bdev1", 00:18:47.889 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:47.889 "strip_size_kb": 0, 00:18:47.889 "state": "online", 00:18:47.889 "raid_level": "raid1", 00:18:47.889 "superblock": true, 00:18:47.889 "num_base_bdevs": 4, 00:18:47.889 "num_base_bdevs_discovered": 2, 00:18:47.889 "num_base_bdevs_operational": 2, 00:18:47.889 "base_bdevs_list": [ 00:18:47.889 { 00:18:47.889 "name": null, 00:18:47.889 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:47.889 "is_configured": false, 00:18:47.889 "data_offset": 0, 00:18:47.889 "data_size": 63488 00:18:47.889 }, 00:18:47.889 { 00:18:47.889 "name": null, 00:18:47.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.889 "is_configured": false, 00:18:47.889 "data_offset": 2048, 00:18:47.889 "data_size": 63488 00:18:47.889 }, 00:18:47.889 { 00:18:47.889 "name": "BaseBdev3", 00:18:47.889 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:47.889 "is_configured": true, 00:18:47.889 "data_offset": 2048, 00:18:47.889 "data_size": 63488 00:18:47.889 }, 00:18:47.889 { 00:18:47.889 "name": "BaseBdev4", 00:18:47.889 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:47.889 "is_configured": true, 00:18:47.889 "data_offset": 2048, 00:18:47.889 "data_size": 63488 00:18:47.889 } 00:18:47.889 ] 00:18:47.889 }' 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.889 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.889 19:38:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.889 [2024-12-05 19:38:41.230981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.889 [2024-12-05 19:38:41.231053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.889 [2024-12-05 19:38:41.231097] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:47.889 [2024-12-05 19:38:41.231150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.890 [2024-12-05 19:38:41.231799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.890 [2024-12-05 19:38:41.231847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.890 [2024-12-05 19:38:41.231950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:47.890 [2024-12-05 19:38:41.231976] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:47.890 [2024-12-05 19:38:41.231992] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:47.890 [2024-12-05 19:38:41.232007] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:47.890 BaseBdev1 00:18:47.890 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.890 19:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.826 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.085 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.085 "name": "raid_bdev1", 00:18:49.085 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:49.085 "strip_size_kb": 0, 00:18:49.085 "state": "online", 00:18:49.085 "raid_level": "raid1", 00:18:49.085 "superblock": true, 00:18:49.085 "num_base_bdevs": 4, 00:18:49.085 "num_base_bdevs_discovered": 2, 00:18:49.085 "num_base_bdevs_operational": 2, 00:18:49.085 "base_bdevs_list": [ 00:18:49.085 { 00:18:49.085 "name": null, 00:18:49.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.085 "is_configured": false, 00:18:49.085 
"data_offset": 0, 00:18:49.085 "data_size": 63488 00:18:49.085 }, 00:18:49.085 { 00:18:49.085 "name": null, 00:18:49.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.085 "is_configured": false, 00:18:49.085 "data_offset": 2048, 00:18:49.085 "data_size": 63488 00:18:49.085 }, 00:18:49.085 { 00:18:49.085 "name": "BaseBdev3", 00:18:49.085 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:49.085 "is_configured": true, 00:18:49.085 "data_offset": 2048, 00:18:49.085 "data_size": 63488 00:18:49.085 }, 00:18:49.085 { 00:18:49.085 "name": "BaseBdev4", 00:18:49.086 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:49.086 "is_configured": true, 00:18:49.086 "data_offset": 2048, 00:18:49.086 "data_size": 63488 00:18:49.086 } 00:18:49.086 ] 00:18:49.086 }' 00:18:49.086 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.086 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:49.345 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.604 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.604 "name": "raid_bdev1", 00:18:49.604 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:49.604 "strip_size_kb": 0, 00:18:49.604 "state": "online", 00:18:49.604 "raid_level": "raid1", 00:18:49.604 "superblock": true, 00:18:49.604 "num_base_bdevs": 4, 00:18:49.604 "num_base_bdevs_discovered": 2, 00:18:49.604 "num_base_bdevs_operational": 2, 00:18:49.604 "base_bdevs_list": [ 00:18:49.604 { 00:18:49.604 "name": null, 00:18:49.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.604 "is_configured": false, 00:18:49.604 "data_offset": 0, 00:18:49.604 "data_size": 63488 00:18:49.604 }, 00:18:49.604 { 00:18:49.604 "name": null, 00:18:49.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.604 "is_configured": false, 00:18:49.604 "data_offset": 2048, 00:18:49.604 "data_size": 63488 00:18:49.604 }, 00:18:49.604 { 00:18:49.604 "name": "BaseBdev3", 00:18:49.604 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:49.604 "is_configured": true, 00:18:49.604 "data_offset": 2048, 00:18:49.604 "data_size": 63488 00:18:49.604 }, 00:18:49.604 { 00:18:49.604 "name": "BaseBdev4", 00:18:49.604 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:49.604 "is_configured": true, 00:18:49.605 "data_offset": 2048, 00:18:49.605 "data_size": 63488 00:18:49.605 } 00:18:49.605 ] 00:18:49.605 }' 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.605 [2024-12-05 19:38:42.927813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.605 [2024-12-05 19:38:42.928155] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:49.605 [2024-12-05 19:38:42.928183] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:49.605 request: 00:18:49.605 { 00:18:49.605 "base_bdev": "BaseBdev1", 00:18:49.605 "raid_bdev": "raid_bdev1", 00:18:49.605 "method": "bdev_raid_add_base_bdev", 00:18:49.605 "req_id": 1 00:18:49.605 } 00:18:49.605 Got JSON-RPC error response 00:18:49.605 response: 00:18:49.605 { 00:18:49.605 "code": -22, 
00:18:49.605 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:49.605 } 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.605 19:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.541 19:38:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.541 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.542 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.801 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.801 "name": "raid_bdev1", 00:18:50.801 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:50.801 "strip_size_kb": 0, 00:18:50.801 "state": "online", 00:18:50.801 "raid_level": "raid1", 00:18:50.801 "superblock": true, 00:18:50.801 "num_base_bdevs": 4, 00:18:50.801 "num_base_bdevs_discovered": 2, 00:18:50.801 "num_base_bdevs_operational": 2, 00:18:50.801 "base_bdevs_list": [ 00:18:50.801 { 00:18:50.801 "name": null, 00:18:50.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.801 "is_configured": false, 00:18:50.801 "data_offset": 0, 00:18:50.801 "data_size": 63488 00:18:50.801 }, 00:18:50.801 { 00:18:50.801 "name": null, 00:18:50.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.801 "is_configured": false, 00:18:50.801 "data_offset": 2048, 00:18:50.801 "data_size": 63488 00:18:50.801 }, 00:18:50.801 { 00:18:50.801 "name": "BaseBdev3", 00:18:50.801 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:50.801 "is_configured": true, 00:18:50.801 "data_offset": 2048, 00:18:50.801 "data_size": 63488 00:18:50.801 }, 00:18:50.801 { 00:18:50.801 "name": "BaseBdev4", 00:18:50.801 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:50.801 "is_configured": true, 00:18:50.801 "data_offset": 2048, 00:18:50.801 "data_size": 63488 00:18:50.801 } 00:18:50.801 ] 00:18:50.801 }' 00:18:50.801 19:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.801 19:38:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.060 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.319 "name": "raid_bdev1", 00:18:51.319 "uuid": "d394293a-2e17-4cd6-848d-2c5fe03d2597", 00:18:51.319 "strip_size_kb": 0, 00:18:51.319 "state": "online", 00:18:51.319 "raid_level": "raid1", 00:18:51.319 "superblock": true, 00:18:51.319 "num_base_bdevs": 4, 00:18:51.319 "num_base_bdevs_discovered": 2, 00:18:51.319 "num_base_bdevs_operational": 2, 00:18:51.319 "base_bdevs_list": [ 00:18:51.319 { 00:18:51.319 "name": null, 00:18:51.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.319 "is_configured": false, 00:18:51.319 "data_offset": 0, 00:18:51.319 "data_size": 63488 00:18:51.319 }, 00:18:51.319 { 00:18:51.319 "name": null, 00:18:51.319 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:51.319 "is_configured": false, 00:18:51.319 "data_offset": 2048, 00:18:51.319 "data_size": 63488 00:18:51.319 }, 00:18:51.319 { 00:18:51.319 "name": "BaseBdev3", 00:18:51.319 "uuid": "2e998981-11fe-59e0-8cb9-fbec734d7877", 00:18:51.319 "is_configured": true, 00:18:51.319 "data_offset": 2048, 00:18:51.319 "data_size": 63488 00:18:51.319 }, 00:18:51.319 { 00:18:51.319 "name": "BaseBdev4", 00:18:51.319 "uuid": "7951793d-b1d3-57cc-a8c8-88962abd64bc", 00:18:51.319 "is_configured": true, 00:18:51.319 "data_offset": 2048, 00:18:51.319 "data_size": 63488 00:18:51.319 } 00:18:51.319 ] 00:18:51.319 }' 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79504 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79504 ']' 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79504 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79504 00:18:51.319 killing process with pid 79504 00:18:51.319 Received shutdown signal, test time was about 19.348922 seconds 00:18:51.319 00:18:51.319 Latency(us) 00:18:51.319 [2024-12-05T19:38:44.760Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:18:51.319 [2024-12-05T19:38:44.760Z] =================================================================================================================== 00:18:51.319 [2024-12-05T19:38:44.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.319 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.320 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79504' 00:18:51.320 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79504 00:18:51.320 [2024-12-05 19:38:44.660306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.320 19:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79504 00:18:51.320 [2024-12-05 19:38:44.660451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.320 [2024-12-05 19:38:44.660573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.320 [2024-12-05 19:38:44.660588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:51.899 [2024-12-05 19:38:45.026017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.842 19:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:52.842 00:18:52.842 real 0m23.060s 00:18:52.842 user 0m31.452s 00:18:52.842 sys 0m2.416s 00:18:52.842 19:38:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.842 19:38:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.842 ************************************ 00:18:52.842 END TEST raid_rebuild_test_sb_io 00:18:52.842 
************************************ 00:18:52.842 19:38:46 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:52.842 19:38:46 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:52.842 19:38:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:52.842 19:38:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.842 19:38:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.842 ************************************ 00:18:52.842 START TEST raid5f_state_function_test 00:18:52.842 ************************************ 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.842 19:38:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80236 00:18:52.842 Process raid pid: 80236 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80236' 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80236 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80236 ']' 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.842 19:38:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.101 [2024-12-05 19:38:46.306332] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:18:53.101 [2024-12-05 19:38:46.306514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.101 [2024-12-05 19:38:46.494064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.360 [2024-12-05 19:38:46.625090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.619 [2024-12-05 19:38:46.829349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.619 [2024-12-05 19:38:46.829397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.879 [2024-12-05 19:38:47.313082] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.879 [2024-12-05 19:38:47.313214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.879 [2024-12-05 19:38:47.313230] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.879 [2024-12-05 19:38:47.313245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.879 [2024-12-05 19:38:47.313259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:53.879 [2024-12-05 19:38:47.313273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.879 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.138 "name": "Existed_Raid", 00:18:54.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.138 "strip_size_kb": 64, 00:18:54.138 "state": "configuring", 00:18:54.138 "raid_level": "raid5f", 00:18:54.138 "superblock": false, 00:18:54.138 "num_base_bdevs": 3, 00:18:54.138 "num_base_bdevs_discovered": 0, 00:18:54.138 "num_base_bdevs_operational": 3, 00:18:54.138 "base_bdevs_list": [ 00:18:54.138 { 00:18:54.138 "name": "BaseBdev1", 00:18:54.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.138 "is_configured": false, 00:18:54.138 "data_offset": 0, 00:18:54.138 "data_size": 0 00:18:54.138 }, 00:18:54.138 { 00:18:54.138 "name": "BaseBdev2", 00:18:54.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.138 "is_configured": false, 00:18:54.138 "data_offset": 0, 00:18:54.138 "data_size": 0 00:18:54.138 }, 00:18:54.138 { 00:18:54.138 "name": "BaseBdev3", 00:18:54.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.138 "is_configured": false, 00:18:54.138 "data_offset": 0, 00:18:54.138 "data_size": 0 00:18:54.138 } 00:18:54.138 ] 00:18:54.138 }' 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.138 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.470 [2024-12-05 19:38:47.841238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.470 [2024-12-05 19:38:47.841300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.470 [2024-12-05 19:38:47.853278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.470 [2024-12-05 19:38:47.853377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.470 [2024-12-05 19:38:47.853399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.470 [2024-12-05 19:38:47.853422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.470 [2024-12-05 19:38:47.853436] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:54.470 [2024-12-05 19:38:47.853455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.470 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.745 [2024-12-05 19:38:47.899461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.745 BaseBdev1 00:18:54.745 19:38:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.745 [ 00:18:54.745 { 00:18:54.745 "name": "BaseBdev1", 00:18:54.745 "aliases": [ 00:18:54.745 "7242f813-7c16-4a52-9712-d586ad3b56cf" 00:18:54.745 ], 00:18:54.745 "product_name": "Malloc disk", 00:18:54.745 "block_size": 512, 00:18:54.745 "num_blocks": 65536, 00:18:54.745 "uuid": "7242f813-7c16-4a52-9712-d586ad3b56cf", 00:18:54.745 "assigned_rate_limits": { 00:18:54.745 "rw_ios_per_sec": 0, 00:18:54.745 
"rw_mbytes_per_sec": 0, 00:18:54.745 "r_mbytes_per_sec": 0, 00:18:54.745 "w_mbytes_per_sec": 0 00:18:54.745 }, 00:18:54.745 "claimed": true, 00:18:54.745 "claim_type": "exclusive_write", 00:18:54.745 "zoned": false, 00:18:54.745 "supported_io_types": { 00:18:54.745 "read": true, 00:18:54.745 "write": true, 00:18:54.745 "unmap": true, 00:18:54.745 "flush": true, 00:18:54.745 "reset": true, 00:18:54.745 "nvme_admin": false, 00:18:54.745 "nvme_io": false, 00:18:54.745 "nvme_io_md": false, 00:18:54.745 "write_zeroes": true, 00:18:54.745 "zcopy": true, 00:18:54.745 "get_zone_info": false, 00:18:54.745 "zone_management": false, 00:18:54.745 "zone_append": false, 00:18:54.745 "compare": false, 00:18:54.745 "compare_and_write": false, 00:18:54.745 "abort": true, 00:18:54.745 "seek_hole": false, 00:18:54.745 "seek_data": false, 00:18:54.745 "copy": true, 00:18:54.745 "nvme_iov_md": false 00:18:54.745 }, 00:18:54.745 "memory_domains": [ 00:18:54.745 { 00:18:54.745 "dma_device_id": "system", 00:18:54.745 "dma_device_type": 1 00:18:54.745 }, 00:18:54.745 { 00:18:54.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.745 "dma_device_type": 2 00:18:54.745 } 00:18:54.745 ], 00:18:54.745 "driver_specific": {} 00:18:54.745 } 00:18:54.745 ] 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.745 19:38:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.745 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.745 "name": "Existed_Raid", 00:18:54.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.746 "strip_size_kb": 64, 00:18:54.746 "state": "configuring", 00:18:54.746 "raid_level": "raid5f", 00:18:54.746 "superblock": false, 00:18:54.746 "num_base_bdevs": 3, 00:18:54.746 "num_base_bdevs_discovered": 1, 00:18:54.746 "num_base_bdevs_operational": 3, 00:18:54.746 "base_bdevs_list": [ 00:18:54.746 { 00:18:54.746 "name": "BaseBdev1", 00:18:54.746 "uuid": "7242f813-7c16-4a52-9712-d586ad3b56cf", 00:18:54.746 "is_configured": true, 00:18:54.746 "data_offset": 0, 00:18:54.746 "data_size": 65536 00:18:54.746 }, 00:18:54.746 { 00:18:54.746 "name": 
"BaseBdev2", 00:18:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.746 "is_configured": false, 00:18:54.746 "data_offset": 0, 00:18:54.746 "data_size": 0 00:18:54.746 }, 00:18:54.746 { 00:18:54.746 "name": "BaseBdev3", 00:18:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.746 "is_configured": false, 00:18:54.746 "data_offset": 0, 00:18:54.746 "data_size": 0 00:18:54.746 } 00:18:54.746 ] 00:18:54.746 }' 00:18:54.746 19:38:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.746 19:38:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.004 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:55.004 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.004 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.263 [2024-12-05 19:38:48.447678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.263 [2024-12-05 19:38:48.447776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.263 [2024-12-05 19:38:48.455750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.263 [2024-12-05 19:38:48.458238] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:55.263 [2024-12-05 19:38:48.458306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.263 [2024-12-05 19:38:48.458337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.263 [2024-12-05 19:38:48.458352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.263 "name": "Existed_Raid", 00:18:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.263 "strip_size_kb": 64, 00:18:55.263 "state": "configuring", 00:18:55.263 "raid_level": "raid5f", 00:18:55.263 "superblock": false, 00:18:55.263 "num_base_bdevs": 3, 00:18:55.263 "num_base_bdevs_discovered": 1, 00:18:55.263 "num_base_bdevs_operational": 3, 00:18:55.263 "base_bdevs_list": [ 00:18:55.263 { 00:18:55.263 "name": "BaseBdev1", 00:18:55.263 "uuid": "7242f813-7c16-4a52-9712-d586ad3b56cf", 00:18:55.263 "is_configured": true, 00:18:55.263 "data_offset": 0, 00:18:55.263 "data_size": 65536 00:18:55.263 }, 00:18:55.263 { 00:18:55.263 "name": "BaseBdev2", 00:18:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.263 "is_configured": false, 00:18:55.263 "data_offset": 0, 00:18:55.263 "data_size": 0 00:18:55.263 }, 00:18:55.263 { 00:18:55.263 "name": "BaseBdev3", 00:18:55.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.263 "is_configured": false, 00:18:55.263 "data_offset": 0, 00:18:55.263 "data_size": 0 00:18:55.263 } 00:18:55.263 ] 00:18:55.263 }' 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.263 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 19:38:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:55.830 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.830 19:38:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 [2024-12-05 19:38:49.035396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.830 BaseBdev2 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.830 19:38:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.830 [ 00:18:55.830 { 00:18:55.830 "name": "BaseBdev2", 00:18:55.830 "aliases": [ 00:18:55.830 "01351689-5f41-4cd7-bcf8-b140230ead36" 00:18:55.830 ], 00:18:55.830 "product_name": "Malloc disk", 00:18:55.830 "block_size": 512, 00:18:55.830 "num_blocks": 65536, 00:18:55.830 "uuid": "01351689-5f41-4cd7-bcf8-b140230ead36", 00:18:55.830 "assigned_rate_limits": { 00:18:55.830 "rw_ios_per_sec": 0, 00:18:55.830 "rw_mbytes_per_sec": 0, 00:18:55.830 "r_mbytes_per_sec": 0, 00:18:55.830 "w_mbytes_per_sec": 0 00:18:55.830 }, 00:18:55.830 "claimed": true, 00:18:55.830 "claim_type": "exclusive_write", 00:18:55.830 "zoned": false, 00:18:55.830 "supported_io_types": { 00:18:55.830 "read": true, 00:18:55.830 "write": true, 00:18:55.830 "unmap": true, 00:18:55.830 "flush": true, 00:18:55.830 "reset": true, 00:18:55.830 "nvme_admin": false, 00:18:55.830 "nvme_io": false, 00:18:55.830 "nvme_io_md": false, 00:18:55.830 "write_zeroes": true, 00:18:55.830 "zcopy": true, 00:18:55.830 "get_zone_info": false, 00:18:55.830 "zone_management": false, 00:18:55.830 "zone_append": false, 00:18:55.830 "compare": false, 00:18:55.830 "compare_and_write": false, 00:18:55.830 "abort": true, 00:18:55.830 "seek_hole": false, 00:18:55.830 "seek_data": false, 00:18:55.830 "copy": true, 00:18:55.830 "nvme_iov_md": false 00:18:55.830 }, 00:18:55.830 "memory_domains": [ 00:18:55.830 { 00:18:55.831 "dma_device_id": "system", 00:18:55.831 "dma_device_type": 1 00:18:55.831 }, 00:18:55.831 { 00:18:55.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.831 "dma_device_type": 2 00:18:55.831 } 00:18:55.831 ], 00:18:55.831 "driver_specific": {} 00:18:55.831 } 00:18:55.831 ] 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:55.831 "name": "Existed_Raid", 00:18:55.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.831 "strip_size_kb": 64, 00:18:55.831 "state": "configuring", 00:18:55.831 "raid_level": "raid5f", 00:18:55.831 "superblock": false, 00:18:55.831 "num_base_bdevs": 3, 00:18:55.831 "num_base_bdevs_discovered": 2, 00:18:55.831 "num_base_bdevs_operational": 3, 00:18:55.831 "base_bdevs_list": [ 00:18:55.831 { 00:18:55.831 "name": "BaseBdev1", 00:18:55.831 "uuid": "7242f813-7c16-4a52-9712-d586ad3b56cf", 00:18:55.831 "is_configured": true, 00:18:55.831 "data_offset": 0, 00:18:55.831 "data_size": 65536 00:18:55.831 }, 00:18:55.831 { 00:18:55.831 "name": "BaseBdev2", 00:18:55.831 "uuid": "01351689-5f41-4cd7-bcf8-b140230ead36", 00:18:55.831 "is_configured": true, 00:18:55.831 "data_offset": 0, 00:18:55.831 "data_size": 65536 00:18:55.831 }, 00:18:55.831 { 00:18:55.831 "name": "BaseBdev3", 00:18:55.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.831 "is_configured": false, 00:18:55.831 "data_offset": 0, 00:18:55.831 "data_size": 0 00:18:55.831 } 00:18:55.831 ] 00:18:55.831 }' 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.831 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.399 [2024-12-05 19:38:49.638667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.399 [2024-12-05 19:38:49.639048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:56.399 [2024-12-05 19:38:49.639083] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:56.399 [2024-12-05 19:38:49.639424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:56.399 [2024-12-05 19:38:49.645272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:56.399 [2024-12-05 19:38:49.645426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:56.399 [2024-12-05 19:38:49.645905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.399 BaseBdev3 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.399 [ 00:18:56.399 { 00:18:56.399 "name": "BaseBdev3", 00:18:56.399 "aliases": [ 00:18:56.399 "b5bf49bc-ddf6-4727-a4d2-531221af6d1c" 00:18:56.399 ], 00:18:56.399 "product_name": "Malloc disk", 00:18:56.399 "block_size": 512, 00:18:56.399 "num_blocks": 65536, 00:18:56.399 "uuid": "b5bf49bc-ddf6-4727-a4d2-531221af6d1c", 00:18:56.399 "assigned_rate_limits": { 00:18:56.399 "rw_ios_per_sec": 0, 00:18:56.399 "rw_mbytes_per_sec": 0, 00:18:56.399 "r_mbytes_per_sec": 0, 00:18:56.399 "w_mbytes_per_sec": 0 00:18:56.399 }, 00:18:56.399 "claimed": true, 00:18:56.399 "claim_type": "exclusive_write", 00:18:56.399 "zoned": false, 00:18:56.399 "supported_io_types": { 00:18:56.399 "read": true, 00:18:56.399 "write": true, 00:18:56.399 "unmap": true, 00:18:56.399 "flush": true, 00:18:56.399 "reset": true, 00:18:56.399 "nvme_admin": false, 00:18:56.399 "nvme_io": false, 00:18:56.399 "nvme_io_md": false, 00:18:56.399 "write_zeroes": true, 00:18:56.399 "zcopy": true, 00:18:56.399 "get_zone_info": false, 00:18:56.399 "zone_management": false, 00:18:56.399 "zone_append": false, 00:18:56.399 "compare": false, 00:18:56.399 "compare_and_write": false, 00:18:56.399 "abort": true, 00:18:56.399 "seek_hole": false, 00:18:56.399 "seek_data": false, 00:18:56.399 "copy": true, 00:18:56.399 "nvme_iov_md": false 00:18:56.399 }, 00:18:56.399 "memory_domains": [ 00:18:56.399 { 00:18:56.399 "dma_device_id": "system", 00:18:56.399 "dma_device_type": 1 00:18:56.399 }, 00:18:56.399 { 00:18:56.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.399 "dma_device_type": 2 00:18:56.399 } 00:18:56.399 ], 00:18:56.399 "driver_specific": {} 00:18:56.399 } 00:18:56.399 ] 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.399 19:38:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.399 "name": "Existed_Raid", 00:18:56.399 "uuid": "af483ddb-f275-4a94-98ae-84ab6f431565", 00:18:56.399 "strip_size_kb": 64, 00:18:56.399 "state": "online", 00:18:56.399 "raid_level": "raid5f", 00:18:56.399 "superblock": false, 00:18:56.399 "num_base_bdevs": 3, 00:18:56.399 "num_base_bdevs_discovered": 3, 00:18:56.399 "num_base_bdevs_operational": 3, 00:18:56.399 "base_bdevs_list": [ 00:18:56.399 { 00:18:56.399 "name": "BaseBdev1", 00:18:56.399 "uuid": "7242f813-7c16-4a52-9712-d586ad3b56cf", 00:18:56.399 "is_configured": true, 00:18:56.399 "data_offset": 0, 00:18:56.399 "data_size": 65536 00:18:56.399 }, 00:18:56.399 { 00:18:56.399 "name": "BaseBdev2", 00:18:56.399 "uuid": "01351689-5f41-4cd7-bcf8-b140230ead36", 00:18:56.399 "is_configured": true, 00:18:56.399 "data_offset": 0, 00:18:56.399 "data_size": 65536 00:18:56.399 }, 00:18:56.399 { 00:18:56.399 "name": "BaseBdev3", 00:18:56.399 "uuid": "b5bf49bc-ddf6-4727-a4d2-531221af6d1c", 00:18:56.399 "is_configured": true, 00:18:56.399 "data_offset": 0, 00:18:56.399 "data_size": 65536 00:18:56.399 } 00:18:56.399 ] 00:18:56.399 }' 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.399 19:38:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:56.967 19:38:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.967 [2024-12-05 19:38:50.212366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.967 "name": "Existed_Raid", 00:18:56.967 "aliases": [ 00:18:56.967 "af483ddb-f275-4a94-98ae-84ab6f431565" 00:18:56.967 ], 00:18:56.967 "product_name": "Raid Volume", 00:18:56.967 "block_size": 512, 00:18:56.967 "num_blocks": 131072, 00:18:56.967 "uuid": "af483ddb-f275-4a94-98ae-84ab6f431565", 00:18:56.967 "assigned_rate_limits": { 00:18:56.967 "rw_ios_per_sec": 0, 00:18:56.967 "rw_mbytes_per_sec": 0, 00:18:56.967 "r_mbytes_per_sec": 0, 00:18:56.967 "w_mbytes_per_sec": 0 00:18:56.967 }, 00:18:56.967 "claimed": false, 00:18:56.967 "zoned": false, 00:18:56.967 "supported_io_types": { 00:18:56.967 "read": true, 00:18:56.967 "write": true, 00:18:56.967 "unmap": false, 00:18:56.967 "flush": false, 00:18:56.967 "reset": true, 00:18:56.967 "nvme_admin": false, 00:18:56.967 "nvme_io": false, 00:18:56.967 "nvme_io_md": false, 00:18:56.967 "write_zeroes": true, 00:18:56.967 "zcopy": false, 00:18:56.967 "get_zone_info": false, 00:18:56.967 "zone_management": false, 00:18:56.967 "zone_append": false, 
00:18:56.967 "compare": false, 00:18:56.967 "compare_and_write": false, 00:18:56.967 "abort": false, 00:18:56.967 "seek_hole": false, 00:18:56.967 "seek_data": false, 00:18:56.967 "copy": false, 00:18:56.967 "nvme_iov_md": false 00:18:56.967 }, 00:18:56.967 "driver_specific": { 00:18:56.967 "raid": { 00:18:56.967 "uuid": "af483ddb-f275-4a94-98ae-84ab6f431565", 00:18:56.967 "strip_size_kb": 64, 00:18:56.967 "state": "online", 00:18:56.967 "raid_level": "raid5f", 00:18:56.967 "superblock": false, 00:18:56.967 "num_base_bdevs": 3, 00:18:56.967 "num_base_bdevs_discovered": 3, 00:18:56.967 "num_base_bdevs_operational": 3, 00:18:56.967 "base_bdevs_list": [ 00:18:56.967 { 00:18:56.967 "name": "BaseBdev1", 00:18:56.967 "uuid": "7242f813-7c16-4a52-9712-d586ad3b56cf", 00:18:56.967 "is_configured": true, 00:18:56.967 "data_offset": 0, 00:18:56.967 "data_size": 65536 00:18:56.967 }, 00:18:56.967 { 00:18:56.967 "name": "BaseBdev2", 00:18:56.967 "uuid": "01351689-5f41-4cd7-bcf8-b140230ead36", 00:18:56.967 "is_configured": true, 00:18:56.967 "data_offset": 0, 00:18:56.967 "data_size": 65536 00:18:56.967 }, 00:18:56.967 { 00:18:56.967 "name": "BaseBdev3", 00:18:56.967 "uuid": "b5bf49bc-ddf6-4727-a4d2-531221af6d1c", 00:18:56.967 "is_configured": true, 00:18:56.967 "data_offset": 0, 00:18:56.967 "data_size": 65536 00:18:56.967 } 00:18:56.967 ] 00:18:56.967 } 00:18:56.967 } 00:18:56.967 }' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:56.967 BaseBdev2 00:18:56.967 BaseBdev3' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.967 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.226 19:38:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 [2024-12-05 19:38:50.544186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:57.227 
19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.487 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.487 "name": "Existed_Raid", 00:18:57.487 "uuid": "af483ddb-f275-4a94-98ae-84ab6f431565", 00:18:57.487 "strip_size_kb": 64, 00:18:57.487 "state": 
"online", 00:18:57.487 "raid_level": "raid5f", 00:18:57.487 "superblock": false, 00:18:57.487 "num_base_bdevs": 3, 00:18:57.487 "num_base_bdevs_discovered": 2, 00:18:57.487 "num_base_bdevs_operational": 2, 00:18:57.487 "base_bdevs_list": [ 00:18:57.487 { 00:18:57.487 "name": null, 00:18:57.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.487 "is_configured": false, 00:18:57.487 "data_offset": 0, 00:18:57.487 "data_size": 65536 00:18:57.487 }, 00:18:57.487 { 00:18:57.487 "name": "BaseBdev2", 00:18:57.487 "uuid": "01351689-5f41-4cd7-bcf8-b140230ead36", 00:18:57.487 "is_configured": true, 00:18:57.487 "data_offset": 0, 00:18:57.487 "data_size": 65536 00:18:57.487 }, 00:18:57.487 { 00:18:57.487 "name": "BaseBdev3", 00:18:57.487 "uuid": "b5bf49bc-ddf6-4727-a4d2-531221af6d1c", 00:18:57.487 "is_configured": true, 00:18:57.487 "data_offset": 0, 00:18:57.487 "data_size": 65536 00:18:57.487 } 00:18:57.487 ] 00:18:57.487 }' 00:18:57.487 19:38:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.487 19:38:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.746 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.005 19:38:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:58.005 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.005 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.006 [2024-12-05 19:38:51.211507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.006 [2024-12-05 19:38:51.211675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.006 [2024-12-05 19:38:51.295493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.006 [2024-12-05 19:38:51.355523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:58.006 [2024-12-05 19:38:51.355597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.006 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 BaseBdev2 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:58.266 [ 00:18:58.266 { 00:18:58.266 "name": "BaseBdev2", 00:18:58.266 "aliases": [ 00:18:58.266 "b51e9a93-dbc3-41ba-9360-367175e6981f" 00:18:58.266 ], 00:18:58.266 "product_name": "Malloc disk", 00:18:58.266 "block_size": 512, 00:18:58.266 "num_blocks": 65536, 00:18:58.266 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:18:58.266 "assigned_rate_limits": { 00:18:58.266 "rw_ios_per_sec": 0, 00:18:58.266 "rw_mbytes_per_sec": 0, 00:18:58.266 "r_mbytes_per_sec": 0, 00:18:58.266 "w_mbytes_per_sec": 0 00:18:58.266 }, 00:18:58.266 "claimed": false, 00:18:58.266 "zoned": false, 00:18:58.266 "supported_io_types": { 00:18:58.266 "read": true, 00:18:58.266 "write": true, 00:18:58.266 "unmap": true, 00:18:58.266 "flush": true, 00:18:58.266 "reset": true, 00:18:58.266 "nvme_admin": false, 00:18:58.266 "nvme_io": false, 00:18:58.266 "nvme_io_md": false, 00:18:58.266 "write_zeroes": true, 00:18:58.266 "zcopy": true, 00:18:58.266 "get_zone_info": false, 00:18:58.266 "zone_management": false, 00:18:58.266 "zone_append": false, 00:18:58.266 "compare": false, 00:18:58.266 "compare_and_write": false, 00:18:58.266 "abort": true, 00:18:58.266 "seek_hole": false, 00:18:58.266 "seek_data": false, 00:18:58.266 "copy": true, 00:18:58.266 "nvme_iov_md": false 00:18:58.266 }, 00:18:58.266 "memory_domains": [ 00:18:58.266 { 00:18:58.266 "dma_device_id": "system", 00:18:58.266 "dma_device_type": 1 00:18:58.266 }, 00:18:58.266 { 00:18:58.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.266 "dma_device_type": 2 00:18:58.266 } 00:18:58.266 ], 00:18:58.266 "driver_specific": {} 00:18:58.266 } 00:18:58.266 ] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 BaseBdev3 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.266 19:38:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.266 [ 00:18:58.266 { 00:18:58.266 "name": "BaseBdev3", 00:18:58.266 "aliases": [ 00:18:58.266 "18b185e5-b147-46fe-b12e-996af63c1adf" 00:18:58.266 ], 00:18:58.266 "product_name": "Malloc disk", 00:18:58.266 "block_size": 512, 00:18:58.266 "num_blocks": 65536, 00:18:58.266 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:18:58.266 "assigned_rate_limits": { 00:18:58.266 "rw_ios_per_sec": 0, 00:18:58.266 "rw_mbytes_per_sec": 0, 00:18:58.266 "r_mbytes_per_sec": 0, 00:18:58.266 "w_mbytes_per_sec": 0 00:18:58.266 }, 00:18:58.266 "claimed": false, 00:18:58.266 "zoned": false, 00:18:58.266 "supported_io_types": { 00:18:58.266 "read": true, 00:18:58.266 "write": true, 00:18:58.266 "unmap": true, 00:18:58.266 "flush": true, 00:18:58.266 "reset": true, 00:18:58.266 "nvme_admin": false, 00:18:58.267 "nvme_io": false, 00:18:58.267 "nvme_io_md": false, 00:18:58.267 "write_zeroes": true, 00:18:58.267 "zcopy": true, 00:18:58.267 "get_zone_info": false, 00:18:58.267 "zone_management": false, 00:18:58.267 "zone_append": false, 00:18:58.267 "compare": false, 00:18:58.267 "compare_and_write": false, 00:18:58.267 "abort": true, 00:18:58.267 "seek_hole": false, 00:18:58.267 "seek_data": false, 00:18:58.267 "copy": true, 00:18:58.267 "nvme_iov_md": false 00:18:58.267 }, 00:18:58.267 "memory_domains": [ 00:18:58.267 { 00:18:58.267 "dma_device_id": "system", 00:18:58.267 "dma_device_type": 1 00:18:58.267 }, 00:18:58.267 { 00:18:58.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.267 "dma_device_type": 2 00:18:58.267 } 00:18:58.267 ], 00:18:58.267 "driver_specific": {} 00:18:58.267 } 00:18:58.267 ] 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:58.267 19:38:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.267 [2024-12-05 19:38:51.646970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.267 [2024-12-05 19:38:51.647023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.267 [2024-12-05 19:38:51.647054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.267 [2024-12-05 19:38:51.649641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.267 19:38:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.267 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.526 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.526 "name": "Existed_Raid", 00:18:58.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.526 "strip_size_kb": 64, 00:18:58.526 "state": "configuring", 00:18:58.526 "raid_level": "raid5f", 00:18:58.526 "superblock": false, 00:18:58.526 "num_base_bdevs": 3, 00:18:58.526 "num_base_bdevs_discovered": 2, 00:18:58.526 "num_base_bdevs_operational": 3, 00:18:58.526 "base_bdevs_list": [ 00:18:58.526 { 00:18:58.526 "name": "BaseBdev1", 00:18:58.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.526 "is_configured": false, 00:18:58.526 "data_offset": 0, 00:18:58.526 "data_size": 0 00:18:58.526 }, 00:18:58.526 { 00:18:58.526 "name": "BaseBdev2", 00:18:58.526 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:18:58.526 "is_configured": true, 00:18:58.526 "data_offset": 0, 00:18:58.526 "data_size": 65536 00:18:58.526 }, 00:18:58.526 { 00:18:58.526 "name": "BaseBdev3", 00:18:58.526 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:18:58.526 "is_configured": true, 
00:18:58.526 "data_offset": 0, 00:18:58.526 "data_size": 65536 00:18:58.526 } 00:18:58.526 ] 00:18:58.526 }' 00:18:58.526 19:38:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.526 19:38:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.785 [2024-12-05 19:38:52.199148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.785 19:38:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.785 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.044 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.044 "name": "Existed_Raid", 00:18:59.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.044 "strip_size_kb": 64, 00:18:59.044 "state": "configuring", 00:18:59.044 "raid_level": "raid5f", 00:18:59.044 "superblock": false, 00:18:59.044 "num_base_bdevs": 3, 00:18:59.044 "num_base_bdevs_discovered": 1, 00:18:59.044 "num_base_bdevs_operational": 3, 00:18:59.044 "base_bdevs_list": [ 00:18:59.044 { 00:18:59.044 "name": "BaseBdev1", 00:18:59.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.044 "is_configured": false, 00:18:59.044 "data_offset": 0, 00:18:59.044 "data_size": 0 00:18:59.044 }, 00:18:59.044 { 00:18:59.044 "name": null, 00:18:59.044 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:18:59.044 "is_configured": false, 00:18:59.044 "data_offset": 0, 00:18:59.044 "data_size": 65536 00:18:59.044 }, 00:18:59.044 { 00:18:59.044 "name": "BaseBdev3", 00:18:59.044 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:18:59.044 "is_configured": true, 00:18:59.044 "data_offset": 0, 00:18:59.044 "data_size": 65536 00:18:59.044 } 00:18:59.044 ] 00:18:59.044 }' 00:18:59.044 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.044 19:38:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.303 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.303 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:59.303 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.303 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.303 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.561 [2024-12-05 19:38:52.787295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.561 BaseBdev1 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:59.561 19:38:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.561 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.561 [ 00:18:59.561 { 00:18:59.561 "name": "BaseBdev1", 00:18:59.561 "aliases": [ 00:18:59.561 "2d777460-ce4a-4098-8bde-436275c007f1" 00:18:59.561 ], 00:18:59.561 "product_name": "Malloc disk", 00:18:59.561 "block_size": 512, 00:18:59.561 "num_blocks": 65536, 00:18:59.561 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:18:59.561 "assigned_rate_limits": { 00:18:59.561 "rw_ios_per_sec": 0, 00:18:59.561 "rw_mbytes_per_sec": 0, 00:18:59.561 "r_mbytes_per_sec": 0, 00:18:59.561 "w_mbytes_per_sec": 0 00:18:59.561 }, 00:18:59.561 "claimed": true, 00:18:59.561 "claim_type": "exclusive_write", 00:18:59.561 "zoned": false, 00:18:59.561 "supported_io_types": { 00:18:59.561 "read": true, 00:18:59.561 "write": true, 00:18:59.561 "unmap": true, 00:18:59.561 "flush": true, 00:18:59.561 "reset": true, 00:18:59.561 "nvme_admin": false, 00:18:59.561 "nvme_io": false, 00:18:59.561 "nvme_io_md": false, 00:18:59.561 "write_zeroes": true, 00:18:59.561 "zcopy": true, 00:18:59.562 "get_zone_info": false, 00:18:59.562 "zone_management": false, 00:18:59.562 "zone_append": false, 00:18:59.562 
"compare": false, 00:18:59.562 "compare_and_write": false, 00:18:59.562 "abort": true, 00:18:59.562 "seek_hole": false, 00:18:59.562 "seek_data": false, 00:18:59.562 "copy": true, 00:18:59.562 "nvme_iov_md": false 00:18:59.562 }, 00:18:59.562 "memory_domains": [ 00:18:59.562 { 00:18:59.562 "dma_device_id": "system", 00:18:59.562 "dma_device_type": 1 00:18:59.562 }, 00:18:59.562 { 00:18:59.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.562 "dma_device_type": 2 00:18:59.562 } 00:18:59.562 ], 00:18:59.562 "driver_specific": {} 00:18:59.562 } 00:18:59.562 ] 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.562 19:38:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.562 "name": "Existed_Raid", 00:18:59.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.562 "strip_size_kb": 64, 00:18:59.562 "state": "configuring", 00:18:59.562 "raid_level": "raid5f", 00:18:59.562 "superblock": false, 00:18:59.562 "num_base_bdevs": 3, 00:18:59.562 "num_base_bdevs_discovered": 2, 00:18:59.562 "num_base_bdevs_operational": 3, 00:18:59.562 "base_bdevs_list": [ 00:18:59.562 { 00:18:59.562 "name": "BaseBdev1", 00:18:59.562 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:18:59.562 "is_configured": true, 00:18:59.562 "data_offset": 0, 00:18:59.562 "data_size": 65536 00:18:59.562 }, 00:18:59.562 { 00:18:59.562 "name": null, 00:18:59.562 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:18:59.562 "is_configured": false, 00:18:59.562 "data_offset": 0, 00:18:59.562 "data_size": 65536 00:18:59.562 }, 00:18:59.562 { 00:18:59.562 "name": "BaseBdev3", 00:18:59.562 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:18:59.562 "is_configured": true, 00:18:59.562 "data_offset": 0, 00:18:59.562 "data_size": 65536 00:18:59.562 } 00:18:59.562 ] 00:18:59.562 }' 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.562 19:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.130 19:38:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.130 [2024-12-05 19:38:53.395607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.130 19:38:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.130 "name": "Existed_Raid", 00:19:00.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.130 "strip_size_kb": 64, 00:19:00.130 "state": "configuring", 00:19:00.130 "raid_level": "raid5f", 00:19:00.130 "superblock": false, 00:19:00.130 "num_base_bdevs": 3, 00:19:00.130 "num_base_bdevs_discovered": 1, 00:19:00.130 "num_base_bdevs_operational": 3, 00:19:00.130 "base_bdevs_list": [ 00:19:00.130 { 00:19:00.130 "name": "BaseBdev1", 00:19:00.130 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:00.130 "is_configured": true, 00:19:00.130 "data_offset": 0, 00:19:00.130 "data_size": 65536 00:19:00.130 }, 00:19:00.130 { 00:19:00.130 "name": null, 00:19:00.130 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:19:00.130 "is_configured": false, 00:19:00.130 "data_offset": 0, 00:19:00.130 "data_size": 65536 00:19:00.130 }, 00:19:00.130 { 00:19:00.130 "name": null, 
00:19:00.130 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:19:00.130 "is_configured": false, 00:19:00.130 "data_offset": 0, 00:19:00.130 "data_size": 65536 00:19:00.130 } 00:19:00.130 ] 00:19:00.130 }' 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.130 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:00.698 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.699 [2024-12-05 19:38:53.947849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.699 19:38:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.699 19:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.699 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.699 "name": "Existed_Raid", 00:19:00.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.699 "strip_size_kb": 64, 00:19:00.699 "state": "configuring", 00:19:00.699 "raid_level": "raid5f", 00:19:00.699 "superblock": false, 00:19:00.699 "num_base_bdevs": 3, 00:19:00.699 "num_base_bdevs_discovered": 2, 00:19:00.699 "num_base_bdevs_operational": 3, 00:19:00.699 "base_bdevs_list": [ 00:19:00.699 { 
00:19:00.699 "name": "BaseBdev1", 00:19:00.699 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:00.699 "is_configured": true, 00:19:00.699 "data_offset": 0, 00:19:00.699 "data_size": 65536 00:19:00.699 }, 00:19:00.699 { 00:19:00.699 "name": null, 00:19:00.699 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:19:00.699 "is_configured": false, 00:19:00.699 "data_offset": 0, 00:19:00.699 "data_size": 65536 00:19:00.699 }, 00:19:00.699 { 00:19:00.699 "name": "BaseBdev3", 00:19:00.699 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:19:00.699 "is_configured": true, 00:19:00.699 "data_offset": 0, 00:19:00.699 "data_size": 65536 00:19:00.699 } 00:19:00.699 ] 00:19:00.699 }' 00:19:00.699 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.699 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.268 [2024-12-05 19:38:54.524075] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.268 "name": "Existed_Raid", 00:19:01.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.268 "strip_size_kb": 64, 00:19:01.268 "state": "configuring", 00:19:01.268 "raid_level": "raid5f", 00:19:01.268 "superblock": false, 00:19:01.268 "num_base_bdevs": 3, 00:19:01.268 "num_base_bdevs_discovered": 1, 00:19:01.268 "num_base_bdevs_operational": 3, 00:19:01.268 "base_bdevs_list": [ 00:19:01.268 { 00:19:01.268 "name": null, 00:19:01.268 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:01.268 "is_configured": false, 00:19:01.268 "data_offset": 0, 00:19:01.268 "data_size": 65536 00:19:01.268 }, 00:19:01.268 { 00:19:01.268 "name": null, 00:19:01.268 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:19:01.268 "is_configured": false, 00:19:01.268 "data_offset": 0, 00:19:01.268 "data_size": 65536 00:19:01.268 }, 00:19:01.268 { 00:19:01.268 "name": "BaseBdev3", 00:19:01.268 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:19:01.268 "is_configured": true, 00:19:01.268 "data_offset": 0, 00:19:01.268 "data_size": 65536 00:19:01.268 } 00:19:01.268 ] 00:19:01.268 }' 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.268 19:38:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.836 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.837 [2024-12-05 19:38:55.185518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.837 19:38:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.837 "name": "Existed_Raid", 00:19:01.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.837 "strip_size_kb": 64, 00:19:01.837 "state": "configuring", 00:19:01.837 "raid_level": "raid5f", 00:19:01.837 "superblock": false, 00:19:01.837 "num_base_bdevs": 3, 00:19:01.837 "num_base_bdevs_discovered": 2, 00:19:01.837 "num_base_bdevs_operational": 3, 00:19:01.837 "base_bdevs_list": [ 00:19:01.837 { 00:19:01.837 "name": null, 00:19:01.837 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:01.837 "is_configured": false, 00:19:01.837 "data_offset": 0, 00:19:01.837 "data_size": 65536 00:19:01.837 }, 00:19:01.837 { 00:19:01.837 "name": "BaseBdev2", 00:19:01.837 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:19:01.837 "is_configured": true, 00:19:01.837 "data_offset": 0, 00:19:01.837 "data_size": 65536 00:19:01.837 }, 00:19:01.837 { 00:19:01.837 "name": "BaseBdev3", 00:19:01.837 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:19:01.837 "is_configured": true, 00:19:01.837 "data_offset": 0, 00:19:01.837 "data_size": 65536 00:19:01.837 } 00:19:01.837 ] 00:19:01.837 }' 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.837 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.435 
19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d777460-ce4a-4098-8bde-436275c007f1 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.435 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.695 [2024-12-05 19:38:55.897612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:02.695 [2024-12-05 19:38:55.897687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:02.695 [2024-12-05 19:38:55.897702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:02.695 [2024-12-05 19:38:55.898048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:19:02.695 [2024-12-05 19:38:55.903013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:02.695 [2024-12-05 19:38:55.903040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:02.695 [2024-12-05 19:38:55.903417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.695 NewBaseBdev 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:02.695 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.695 19:38:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.695 [ 00:19:02.695 { 00:19:02.695 "name": "NewBaseBdev", 00:19:02.695 "aliases": [ 00:19:02.695 "2d777460-ce4a-4098-8bde-436275c007f1" 00:19:02.695 ], 00:19:02.695 "product_name": "Malloc disk", 00:19:02.695 "block_size": 512, 00:19:02.695 "num_blocks": 65536, 00:19:02.695 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:02.695 "assigned_rate_limits": { 00:19:02.695 "rw_ios_per_sec": 0, 00:19:02.695 "rw_mbytes_per_sec": 0, 00:19:02.695 "r_mbytes_per_sec": 0, 00:19:02.695 "w_mbytes_per_sec": 0 00:19:02.695 }, 00:19:02.695 "claimed": true, 00:19:02.695 "claim_type": "exclusive_write", 00:19:02.695 "zoned": false, 00:19:02.695 "supported_io_types": { 00:19:02.695 "read": true, 00:19:02.695 "write": true, 00:19:02.695 "unmap": true, 00:19:02.695 "flush": true, 00:19:02.695 "reset": true, 00:19:02.695 "nvme_admin": false, 00:19:02.695 "nvme_io": false, 00:19:02.695 "nvme_io_md": false, 00:19:02.695 "write_zeroes": true, 00:19:02.695 "zcopy": true, 00:19:02.695 "get_zone_info": false, 00:19:02.695 "zone_management": false, 00:19:02.695 "zone_append": false, 00:19:02.695 "compare": false, 00:19:02.695 "compare_and_write": false, 00:19:02.695 "abort": true, 00:19:02.695 "seek_hole": false, 00:19:02.695 "seek_data": false, 00:19:02.695 "copy": true, 00:19:02.695 "nvme_iov_md": false 00:19:02.695 }, 00:19:02.695 "memory_domains": [ 00:19:02.695 { 00:19:02.695 "dma_device_id": "system", 00:19:02.695 "dma_device_type": 1 00:19:02.695 }, 00:19:02.695 { 00:19:02.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.695 "dma_device_type": 2 00:19:02.695 } 00:19:02.695 ], 00:19:02.695 "driver_specific": {} 00:19:02.695 } 00:19:02.695 ] 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:02.696 19:38:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.696 "name": "Existed_Raid", 00:19:02.696 "uuid": "c3833379-795b-4aea-8c19-9b4a62e9af75", 00:19:02.696 "strip_size_kb": 64, 00:19:02.696 "state": "online", 
00:19:02.696 "raid_level": "raid5f", 00:19:02.696 "superblock": false, 00:19:02.696 "num_base_bdevs": 3, 00:19:02.696 "num_base_bdevs_discovered": 3, 00:19:02.696 "num_base_bdevs_operational": 3, 00:19:02.696 "base_bdevs_list": [ 00:19:02.696 { 00:19:02.696 "name": "NewBaseBdev", 00:19:02.696 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:02.696 "is_configured": true, 00:19:02.696 "data_offset": 0, 00:19:02.696 "data_size": 65536 00:19:02.696 }, 00:19:02.696 { 00:19:02.696 "name": "BaseBdev2", 00:19:02.696 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:19:02.696 "is_configured": true, 00:19:02.696 "data_offset": 0, 00:19:02.696 "data_size": 65536 00:19:02.696 }, 00:19:02.696 { 00:19:02.696 "name": "BaseBdev3", 00:19:02.696 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:19:02.696 "is_configured": true, 00:19:02.696 "data_offset": 0, 00:19:02.696 "data_size": 65536 00:19:02.696 } 00:19:02.696 ] 00:19:02.696 }' 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.696 19:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.264 [2024-12-05 19:38:56.489513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.264 "name": "Existed_Raid", 00:19:03.264 "aliases": [ 00:19:03.264 "c3833379-795b-4aea-8c19-9b4a62e9af75" 00:19:03.264 ], 00:19:03.264 "product_name": "Raid Volume", 00:19:03.264 "block_size": 512, 00:19:03.264 "num_blocks": 131072, 00:19:03.264 "uuid": "c3833379-795b-4aea-8c19-9b4a62e9af75", 00:19:03.264 "assigned_rate_limits": { 00:19:03.264 "rw_ios_per_sec": 0, 00:19:03.264 "rw_mbytes_per_sec": 0, 00:19:03.264 "r_mbytes_per_sec": 0, 00:19:03.264 "w_mbytes_per_sec": 0 00:19:03.264 }, 00:19:03.264 "claimed": false, 00:19:03.264 "zoned": false, 00:19:03.264 "supported_io_types": { 00:19:03.264 "read": true, 00:19:03.264 "write": true, 00:19:03.264 "unmap": false, 00:19:03.264 "flush": false, 00:19:03.264 "reset": true, 00:19:03.264 "nvme_admin": false, 00:19:03.264 "nvme_io": false, 00:19:03.264 "nvme_io_md": false, 00:19:03.264 "write_zeroes": true, 00:19:03.264 "zcopy": false, 00:19:03.264 "get_zone_info": false, 00:19:03.264 "zone_management": false, 00:19:03.264 "zone_append": false, 00:19:03.264 "compare": false, 00:19:03.264 "compare_and_write": false, 00:19:03.264 "abort": false, 00:19:03.264 "seek_hole": false, 00:19:03.264 "seek_data": false, 00:19:03.264 "copy": false, 00:19:03.264 "nvme_iov_md": false 00:19:03.264 }, 00:19:03.264 "driver_specific": { 00:19:03.264 "raid": { 00:19:03.264 "uuid": "c3833379-795b-4aea-8c19-9b4a62e9af75", 
00:19:03.264 "strip_size_kb": 64, 00:19:03.264 "state": "online", 00:19:03.264 "raid_level": "raid5f", 00:19:03.264 "superblock": false, 00:19:03.264 "num_base_bdevs": 3, 00:19:03.264 "num_base_bdevs_discovered": 3, 00:19:03.264 "num_base_bdevs_operational": 3, 00:19:03.264 "base_bdevs_list": [ 00:19:03.264 { 00:19:03.264 "name": "NewBaseBdev", 00:19:03.264 "uuid": "2d777460-ce4a-4098-8bde-436275c007f1", 00:19:03.264 "is_configured": true, 00:19:03.264 "data_offset": 0, 00:19:03.264 "data_size": 65536 00:19:03.264 }, 00:19:03.264 { 00:19:03.264 "name": "BaseBdev2", 00:19:03.264 "uuid": "b51e9a93-dbc3-41ba-9360-367175e6981f", 00:19:03.264 "is_configured": true, 00:19:03.264 "data_offset": 0, 00:19:03.264 "data_size": 65536 00:19:03.264 }, 00:19:03.264 { 00:19:03.264 "name": "BaseBdev3", 00:19:03.264 "uuid": "18b185e5-b147-46fe-b12e-996af63c1adf", 00:19:03.264 "is_configured": true, 00:19:03.264 "data_offset": 0, 00:19:03.264 "data_size": 65536 00:19:03.264 } 00:19:03.264 ] 00:19:03.264 } 00:19:03.264 } 00:19:03.264 }' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:03.264 BaseBdev2 00:19:03.264 BaseBdev3' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.264 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.523 [2024-12-05 19:38:56.833425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.523 [2024-12-05 19:38:56.833487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.523 [2024-12-05 19:38:56.833583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.523 [2024-12-05 19:38:56.834090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.523 [2024-12-05 19:38:56.834233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80236 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80236 ']' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80236 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80236 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80236' 00:19:03.523 killing process with pid 80236 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80236 00:19:03.523 [2024-12-05 19:38:56.871938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.523 19:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80236 00:19:03.782 [2024-12-05 19:38:57.145181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:05.159 00:19:05.159 real 0m12.010s 00:19:05.159 user 0m19.932s 00:19:05.159 sys 0m1.699s 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.159 ************************************ 00:19:05.159 END TEST raid5f_state_function_test 00:19:05.159 ************************************ 00:19:05.159 19:38:58 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:19:05.159 19:38:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:19:05.159 19:38:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.159 19:38:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.159 ************************************ 00:19:05.159 START TEST raid5f_state_function_test_sb 00:19:05.159 ************************************ 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:05.159 Process raid pid: 80870 00:19:05.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80870 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80870' 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80870 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80870 ']' 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.159 19:38:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.159 [2024-12-05 19:38:58.383022] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:19:05.159 [2024-12-05 19:38:58.383742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.159 [2024-12-05 19:38:58.574730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.417 [2024-12-05 19:38:58.708768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.675 [2024-12-05 19:38:58.922450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.675 [2024-12-05 19:38:58.922761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.934 [2024-12-05 19:38:59.366970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.934 [2024-12-05 19:38:59.367039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.934 [2024-12-05 19:38:59.367057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.934 [2024-12-05 19:38:59.367074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.934 [2024-12-05 19:38:59.367084] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:19:05.934 [2024-12-05 19:38:59.367099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.934 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.193 19:38:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.193 "name": "Existed_Raid", 00:19:06.193 "uuid": "7d95bfa5-510d-43e5-a6f1-fc1484dc7af3", 00:19:06.193 "strip_size_kb": 64, 00:19:06.193 "state": "configuring", 00:19:06.193 "raid_level": "raid5f", 00:19:06.193 "superblock": true, 00:19:06.193 "num_base_bdevs": 3, 00:19:06.193 "num_base_bdevs_discovered": 0, 00:19:06.193 "num_base_bdevs_operational": 3, 00:19:06.193 "base_bdevs_list": [ 00:19:06.193 { 00:19:06.193 "name": "BaseBdev1", 00:19:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.193 "is_configured": false, 00:19:06.193 "data_offset": 0, 00:19:06.193 "data_size": 0 00:19:06.193 }, 00:19:06.193 { 00:19:06.193 "name": "BaseBdev2", 00:19:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.193 "is_configured": false, 00:19:06.193 "data_offset": 0, 00:19:06.193 "data_size": 0 00:19:06.193 }, 00:19:06.193 { 00:19:06.193 "name": "BaseBdev3", 00:19:06.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.193 "is_configured": false, 00:19:06.193 "data_offset": 0, 00:19:06.193 "data_size": 0 00:19:06.193 } 00:19:06.193 ] 00:19:06.193 }' 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.193 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 [2024-12-05 19:38:59.863138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.452 
[2024-12-05 19:38:59.863349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 [2024-12-05 19:38:59.871061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.452 [2024-12-05 19:38:59.871127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.452 [2024-12-05 19:38:59.871144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.452 [2024-12-05 19:38:59.871168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.452 [2024-12-05 19:38:59.871178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.452 [2024-12-05 19:38:59.871192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.452 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.711 [2024-12-05 19:38:59.917520] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.711 BaseBdev1 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.711 [ 00:19:06.711 { 00:19:06.711 "name": "BaseBdev1", 00:19:06.711 "aliases": [ 00:19:06.711 "39b3fffc-e5ba-43f6-ba62-38929f4c6809" 00:19:06.711 ], 00:19:06.711 "product_name": "Malloc disk", 00:19:06.711 "block_size": 512, 00:19:06.711 
"num_blocks": 65536, 00:19:06.711 "uuid": "39b3fffc-e5ba-43f6-ba62-38929f4c6809", 00:19:06.711 "assigned_rate_limits": { 00:19:06.711 "rw_ios_per_sec": 0, 00:19:06.711 "rw_mbytes_per_sec": 0, 00:19:06.711 "r_mbytes_per_sec": 0, 00:19:06.711 "w_mbytes_per_sec": 0 00:19:06.711 }, 00:19:06.711 "claimed": true, 00:19:06.711 "claim_type": "exclusive_write", 00:19:06.711 "zoned": false, 00:19:06.711 "supported_io_types": { 00:19:06.711 "read": true, 00:19:06.711 "write": true, 00:19:06.711 "unmap": true, 00:19:06.711 "flush": true, 00:19:06.711 "reset": true, 00:19:06.711 "nvme_admin": false, 00:19:06.711 "nvme_io": false, 00:19:06.711 "nvme_io_md": false, 00:19:06.711 "write_zeroes": true, 00:19:06.711 "zcopy": true, 00:19:06.711 "get_zone_info": false, 00:19:06.711 "zone_management": false, 00:19:06.711 "zone_append": false, 00:19:06.711 "compare": false, 00:19:06.711 "compare_and_write": false, 00:19:06.711 "abort": true, 00:19:06.711 "seek_hole": false, 00:19:06.711 "seek_data": false, 00:19:06.711 "copy": true, 00:19:06.711 "nvme_iov_md": false 00:19:06.711 }, 00:19:06.711 "memory_domains": [ 00:19:06.711 { 00:19:06.711 "dma_device_id": "system", 00:19:06.711 "dma_device_type": 1 00:19:06.711 }, 00:19:06.711 { 00:19:06.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.711 "dma_device_type": 2 00:19:06.711 } 00:19:06.711 ], 00:19:06.711 "driver_specific": {} 00:19:06.711 } 00:19:06.711 ] 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.711 19:38:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.711 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.711 "name": "Existed_Raid", 00:19:06.711 "uuid": "45a4d2f7-f751-47d6-a650-f214df00b31b", 00:19:06.711 "strip_size_kb": 64, 00:19:06.711 "state": "configuring", 00:19:06.711 "raid_level": "raid5f", 00:19:06.711 "superblock": true, 00:19:06.711 "num_base_bdevs": 3, 00:19:06.711 "num_base_bdevs_discovered": 1, 00:19:06.711 "num_base_bdevs_operational": 3, 00:19:06.711 "base_bdevs_list": [ 00:19:06.711 { 00:19:06.711 
"name": "BaseBdev1", 00:19:06.711 "uuid": "39b3fffc-e5ba-43f6-ba62-38929f4c6809", 00:19:06.711 "is_configured": true, 00:19:06.711 "data_offset": 2048, 00:19:06.711 "data_size": 63488 00:19:06.711 }, 00:19:06.711 { 00:19:06.711 "name": "BaseBdev2", 00:19:06.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.711 "is_configured": false, 00:19:06.711 "data_offset": 0, 00:19:06.711 "data_size": 0 00:19:06.711 }, 00:19:06.711 { 00:19:06.711 "name": "BaseBdev3", 00:19:06.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.711 "is_configured": false, 00:19:06.711 "data_offset": 0, 00:19:06.711 "data_size": 0 00:19:06.711 } 00:19:06.711 ] 00:19:06.711 }' 00:19:06.711 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.711 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.279 [2024-12-05 19:39:00.465802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:07.279 [2024-12-05 19:39:00.465871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:07.279 [2024-12-05 19:39:00.473846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.279 [2024-12-05 19:39:00.476378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.279 [2024-12-05 19:39:00.476429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.279 [2024-12-05 19:39:00.476446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:07.279 [2024-12-05 19:39:00.476460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.279 "name": "Existed_Raid", 00:19:07.279 "uuid": "28305a66-fc9c-484e-b4d2-35616f8923e2", 00:19:07.279 "strip_size_kb": 64, 00:19:07.279 "state": "configuring", 00:19:07.279 "raid_level": "raid5f", 00:19:07.279 "superblock": true, 00:19:07.279 "num_base_bdevs": 3, 00:19:07.279 "num_base_bdevs_discovered": 1, 00:19:07.279 "num_base_bdevs_operational": 3, 00:19:07.279 "base_bdevs_list": [ 00:19:07.279 { 00:19:07.279 "name": "BaseBdev1", 00:19:07.279 "uuid": "39b3fffc-e5ba-43f6-ba62-38929f4c6809", 00:19:07.279 "is_configured": true, 00:19:07.279 "data_offset": 2048, 00:19:07.279 "data_size": 63488 00:19:07.279 }, 00:19:07.279 { 00:19:07.279 "name": "BaseBdev2", 00:19:07.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.279 "is_configured": false, 00:19:07.279 "data_offset": 0, 00:19:07.279 "data_size": 0 00:19:07.279 }, 00:19:07.279 { 00:19:07.279 "name": "BaseBdev3", 00:19:07.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.279 "is_configured": false, 00:19:07.279 "data_offset": 0, 00:19:07.279 "data_size": 
0 00:19:07.279 } 00:19:07.279 ] 00:19:07.279 }' 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.279 19:39:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 [2024-12-05 19:39:01.065596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.847 BaseBdev2 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 [ 00:19:07.847 { 00:19:07.847 "name": "BaseBdev2", 00:19:07.847 "aliases": [ 00:19:07.847 "78a56a62-b4f6-4548-9614-e7ebddc99521" 00:19:07.847 ], 00:19:07.847 "product_name": "Malloc disk", 00:19:07.847 "block_size": 512, 00:19:07.847 "num_blocks": 65536, 00:19:07.847 "uuid": "78a56a62-b4f6-4548-9614-e7ebddc99521", 00:19:07.847 "assigned_rate_limits": { 00:19:07.847 "rw_ios_per_sec": 0, 00:19:07.847 "rw_mbytes_per_sec": 0, 00:19:07.847 "r_mbytes_per_sec": 0, 00:19:07.847 "w_mbytes_per_sec": 0 00:19:07.847 }, 00:19:07.847 "claimed": true, 00:19:07.847 "claim_type": "exclusive_write", 00:19:07.847 "zoned": false, 00:19:07.847 "supported_io_types": { 00:19:07.847 "read": true, 00:19:07.847 "write": true, 00:19:07.847 "unmap": true, 00:19:07.847 "flush": true, 00:19:07.847 "reset": true, 00:19:07.847 "nvme_admin": false, 00:19:07.847 "nvme_io": false, 00:19:07.847 "nvme_io_md": false, 00:19:07.847 "write_zeroes": true, 00:19:07.847 "zcopy": true, 00:19:07.847 "get_zone_info": false, 00:19:07.847 "zone_management": false, 00:19:07.847 "zone_append": false, 00:19:07.847 "compare": false, 00:19:07.847 "compare_and_write": false, 00:19:07.847 "abort": true, 00:19:07.847 "seek_hole": false, 00:19:07.847 "seek_data": false, 00:19:07.847 "copy": true, 00:19:07.847 "nvme_iov_md": false 00:19:07.847 }, 00:19:07.847 "memory_domains": [ 00:19:07.847 { 00:19:07.847 "dma_device_id": "system", 00:19:07.847 "dma_device_type": 1 00:19:07.847 }, 00:19:07.847 { 00:19:07.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.847 "dma_device_type": 2 00:19:07.847 } 
00:19:07.847 ], 00:19:07.847 "driver_specific": {} 00:19:07.847 } 00:19:07.847 ] 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.847 19:39:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.847 "name": "Existed_Raid", 00:19:07.847 "uuid": "28305a66-fc9c-484e-b4d2-35616f8923e2", 00:19:07.847 "strip_size_kb": 64, 00:19:07.847 "state": "configuring", 00:19:07.847 "raid_level": "raid5f", 00:19:07.847 "superblock": true, 00:19:07.847 "num_base_bdevs": 3, 00:19:07.847 "num_base_bdevs_discovered": 2, 00:19:07.847 "num_base_bdevs_operational": 3, 00:19:07.847 "base_bdevs_list": [ 00:19:07.847 { 00:19:07.847 "name": "BaseBdev1", 00:19:07.847 "uuid": "39b3fffc-e5ba-43f6-ba62-38929f4c6809", 00:19:07.847 "is_configured": true, 00:19:07.847 "data_offset": 2048, 00:19:07.847 "data_size": 63488 00:19:07.847 }, 00:19:07.847 { 00:19:07.847 "name": "BaseBdev2", 00:19:07.847 "uuid": "78a56a62-b4f6-4548-9614-e7ebddc99521", 00:19:07.847 "is_configured": true, 00:19:07.847 "data_offset": 2048, 00:19:07.847 "data_size": 63488 00:19:07.847 }, 00:19:07.847 { 00:19:07.847 "name": "BaseBdev3", 00:19:07.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.847 "is_configured": false, 00:19:07.847 "data_offset": 0, 00:19:07.847 "data_size": 0 00:19:07.847 } 00:19:07.847 ] 00:19:07.847 }' 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.847 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.414 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:08.414 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:08.414 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.414 [2024-12-05 19:39:01.679225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.414 [2024-12-05 19:39:01.679517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:08.414 [2024-12-05 19:39:01.679543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:08.415 [2024-12-05 19:39:01.679931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:08.415 BaseBdev3 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 [2024-12-05 19:39:01.685244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:08.415 [2024-12-05 19:39:01.685267] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:08.415 [2024-12-05 19:39:01.685448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 [ 00:19:08.415 { 00:19:08.415 "name": "BaseBdev3", 00:19:08.415 "aliases": [ 00:19:08.415 "60c9825a-8f47-4fea-a552-efa69ea0f76b" 00:19:08.415 ], 00:19:08.415 "product_name": "Malloc disk", 00:19:08.415 "block_size": 512, 00:19:08.415 "num_blocks": 65536, 00:19:08.415 "uuid": "60c9825a-8f47-4fea-a552-efa69ea0f76b", 00:19:08.415 "assigned_rate_limits": { 00:19:08.415 "rw_ios_per_sec": 0, 00:19:08.415 "rw_mbytes_per_sec": 0, 00:19:08.415 "r_mbytes_per_sec": 0, 00:19:08.415 "w_mbytes_per_sec": 0 00:19:08.415 }, 00:19:08.415 "claimed": true, 00:19:08.415 "claim_type": "exclusive_write", 00:19:08.415 "zoned": false, 00:19:08.415 "supported_io_types": { 00:19:08.415 "read": true, 00:19:08.415 "write": true, 00:19:08.415 "unmap": true, 00:19:08.415 "flush": true, 00:19:08.415 "reset": true, 00:19:08.415 "nvme_admin": false, 00:19:08.415 "nvme_io": false, 00:19:08.415 "nvme_io_md": false, 00:19:08.415 "write_zeroes": true, 00:19:08.415 "zcopy": true, 00:19:08.415 "get_zone_info": false, 00:19:08.415 "zone_management": false, 00:19:08.415 "zone_append": false, 00:19:08.415 "compare": false, 00:19:08.415 "compare_and_write": false, 00:19:08.415 "abort": true, 00:19:08.415 "seek_hole": false, 00:19:08.415 "seek_data": false, 00:19:08.415 "copy": true, 00:19:08.415 
"nvme_iov_md": false 00:19:08.415 }, 00:19:08.415 "memory_domains": [ 00:19:08.415 { 00:19:08.415 "dma_device_id": "system", 00:19:08.415 "dma_device_type": 1 00:19:08.415 }, 00:19:08.415 { 00:19:08.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.415 "dma_device_type": 2 00:19:08.415 } 00:19:08.415 ], 00:19:08.415 "driver_specific": {} 00:19:08.415 } 00:19:08.415 ] 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.415 "name": "Existed_Raid", 00:19:08.415 "uuid": "28305a66-fc9c-484e-b4d2-35616f8923e2", 00:19:08.415 "strip_size_kb": 64, 00:19:08.415 "state": "online", 00:19:08.415 "raid_level": "raid5f", 00:19:08.415 "superblock": true, 00:19:08.415 "num_base_bdevs": 3, 00:19:08.415 "num_base_bdevs_discovered": 3, 00:19:08.415 "num_base_bdevs_operational": 3, 00:19:08.415 "base_bdevs_list": [ 00:19:08.415 { 00:19:08.415 "name": "BaseBdev1", 00:19:08.415 "uuid": "39b3fffc-e5ba-43f6-ba62-38929f4c6809", 00:19:08.415 "is_configured": true, 00:19:08.415 "data_offset": 2048, 00:19:08.415 "data_size": 63488 00:19:08.415 }, 00:19:08.415 { 00:19:08.415 "name": "BaseBdev2", 00:19:08.415 "uuid": "78a56a62-b4f6-4548-9614-e7ebddc99521", 00:19:08.415 "is_configured": true, 00:19:08.415 "data_offset": 2048, 00:19:08.415 "data_size": 63488 00:19:08.415 }, 00:19:08.415 { 00:19:08.415 "name": "BaseBdev3", 00:19:08.415 "uuid": "60c9825a-8f47-4fea-a552-efa69ea0f76b", 00:19:08.415 "is_configured": true, 00:19:08.415 "data_offset": 2048, 00:19:08.415 "data_size": 63488 00:19:08.415 } 00:19:08.415 ] 00:19:08.415 }' 00:19:08.415 19:39:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.415 19:39:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.983 [2024-12-05 19:39:02.251429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.983 "name": "Existed_Raid", 00:19:08.983 "aliases": [ 00:19:08.983 "28305a66-fc9c-484e-b4d2-35616f8923e2" 00:19:08.983 ], 00:19:08.983 "product_name": "Raid Volume", 00:19:08.983 "block_size": 512, 00:19:08.983 "num_blocks": 126976, 00:19:08.983 "uuid": "28305a66-fc9c-484e-b4d2-35616f8923e2", 00:19:08.983 "assigned_rate_limits": { 00:19:08.983 "rw_ios_per_sec": 0, 00:19:08.983 
"rw_mbytes_per_sec": 0, 00:19:08.983 "r_mbytes_per_sec": 0, 00:19:08.983 "w_mbytes_per_sec": 0 00:19:08.983 }, 00:19:08.983 "claimed": false, 00:19:08.983 "zoned": false, 00:19:08.983 "supported_io_types": { 00:19:08.983 "read": true, 00:19:08.983 "write": true, 00:19:08.983 "unmap": false, 00:19:08.983 "flush": false, 00:19:08.983 "reset": true, 00:19:08.983 "nvme_admin": false, 00:19:08.983 "nvme_io": false, 00:19:08.983 "nvme_io_md": false, 00:19:08.983 "write_zeroes": true, 00:19:08.983 "zcopy": false, 00:19:08.983 "get_zone_info": false, 00:19:08.983 "zone_management": false, 00:19:08.983 "zone_append": false, 00:19:08.983 "compare": false, 00:19:08.983 "compare_and_write": false, 00:19:08.983 "abort": false, 00:19:08.983 "seek_hole": false, 00:19:08.983 "seek_data": false, 00:19:08.983 "copy": false, 00:19:08.983 "nvme_iov_md": false 00:19:08.983 }, 00:19:08.983 "driver_specific": { 00:19:08.983 "raid": { 00:19:08.983 "uuid": "28305a66-fc9c-484e-b4d2-35616f8923e2", 00:19:08.983 "strip_size_kb": 64, 00:19:08.983 "state": "online", 00:19:08.983 "raid_level": "raid5f", 00:19:08.983 "superblock": true, 00:19:08.983 "num_base_bdevs": 3, 00:19:08.983 "num_base_bdevs_discovered": 3, 00:19:08.983 "num_base_bdevs_operational": 3, 00:19:08.983 "base_bdevs_list": [ 00:19:08.983 { 00:19:08.983 "name": "BaseBdev1", 00:19:08.983 "uuid": "39b3fffc-e5ba-43f6-ba62-38929f4c6809", 00:19:08.983 "is_configured": true, 00:19:08.983 "data_offset": 2048, 00:19:08.983 "data_size": 63488 00:19:08.983 }, 00:19:08.983 { 00:19:08.983 "name": "BaseBdev2", 00:19:08.983 "uuid": "78a56a62-b4f6-4548-9614-e7ebddc99521", 00:19:08.983 "is_configured": true, 00:19:08.983 "data_offset": 2048, 00:19:08.983 "data_size": 63488 00:19:08.983 }, 00:19:08.983 { 00:19:08.983 "name": "BaseBdev3", 00:19:08.983 "uuid": "60c9825a-8f47-4fea-a552-efa69ea0f76b", 00:19:08.983 "is_configured": true, 00:19:08.983 "data_offset": 2048, 00:19:08.983 "data_size": 63488 00:19:08.983 } 00:19:08.983 ] 00:19:08.983 } 
00:19:08.983 } 00:19:08.983 }' 00:19:08.983 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:08.984 BaseBdev2 00:19:08.984 BaseBdev3' 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.984 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.243 [2024-12-05 
19:39:02.567281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.243 19:39:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.243 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.502 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.502 "name": "Existed_Raid", 00:19:09.502 "uuid": "28305a66-fc9c-484e-b4d2-35616f8923e2", 00:19:09.502 "strip_size_kb": 64, 00:19:09.502 "state": "online", 00:19:09.502 "raid_level": "raid5f", 00:19:09.502 "superblock": true, 00:19:09.502 "num_base_bdevs": 3, 00:19:09.502 "num_base_bdevs_discovered": 2, 00:19:09.502 "num_base_bdevs_operational": 2, 00:19:09.502 "base_bdevs_list": [ 00:19:09.502 { 00:19:09.502 "name": null, 00:19:09.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.502 "is_configured": false, 00:19:09.502 "data_offset": 0, 00:19:09.502 "data_size": 63488 00:19:09.502 }, 00:19:09.502 { 00:19:09.502 "name": "BaseBdev2", 00:19:09.502 "uuid": "78a56a62-b4f6-4548-9614-e7ebddc99521", 00:19:09.502 "is_configured": true, 00:19:09.502 "data_offset": 2048, 00:19:09.502 "data_size": 63488 00:19:09.502 }, 00:19:09.502 { 00:19:09.502 "name": "BaseBdev3", 00:19:09.502 "uuid": "60c9825a-8f47-4fea-a552-efa69ea0f76b", 00:19:09.502 "is_configured": true, 00:19:09.502 "data_offset": 2048, 00:19:09.502 "data_size": 63488 00:19:09.502 } 00:19:09.502 ] 00:19:09.502 }' 00:19:09.502 19:39:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.502 19:39:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:09.761 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:09.761 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.761 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.761 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.761 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.762 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.762 [2024-12-05 19:39:03.201857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.762 [2024-12-05 19:39:03.202069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.021 [2024-12-05 19:39:03.290220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:10.021 19:39:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.021 [2024-12-05 19:39:03.350252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:10.021 [2024-12-05 19:39:03.350334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.021 
19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.021 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.280 BaseBdev2 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.280 19:39:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.280 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.280 [ 00:19:10.280 { 00:19:10.280 "name": "BaseBdev2", 00:19:10.280 "aliases": [ 00:19:10.280 "02f30591-20d5-49cc-868b-8e5c36936967" 00:19:10.280 ], 00:19:10.281 "product_name": "Malloc disk", 00:19:10.281 "block_size": 512, 00:19:10.281 "num_blocks": 65536, 00:19:10.281 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:10.281 "assigned_rate_limits": { 00:19:10.281 "rw_ios_per_sec": 0, 00:19:10.281 "rw_mbytes_per_sec": 0, 00:19:10.281 "r_mbytes_per_sec": 0, 00:19:10.281 "w_mbytes_per_sec": 0 00:19:10.281 }, 00:19:10.281 "claimed": false, 00:19:10.281 "zoned": false, 00:19:10.281 "supported_io_types": { 00:19:10.281 "read": true, 00:19:10.281 "write": true, 00:19:10.281 "unmap": true, 00:19:10.281 "flush": true, 00:19:10.281 "reset": true, 00:19:10.281 "nvme_admin": false, 00:19:10.281 "nvme_io": false, 00:19:10.281 "nvme_io_md": false, 00:19:10.281 "write_zeroes": true, 00:19:10.281 "zcopy": true, 00:19:10.281 "get_zone_info": false, 
00:19:10.281 "zone_management": false, 00:19:10.281 "zone_append": false, 00:19:10.281 "compare": false, 00:19:10.281 "compare_and_write": false, 00:19:10.281 "abort": true, 00:19:10.281 "seek_hole": false, 00:19:10.281 "seek_data": false, 00:19:10.281 "copy": true, 00:19:10.281 "nvme_iov_md": false 00:19:10.281 }, 00:19:10.281 "memory_domains": [ 00:19:10.281 { 00:19:10.281 "dma_device_id": "system", 00:19:10.281 "dma_device_type": 1 00:19:10.281 }, 00:19:10.281 { 00:19:10.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.281 "dma_device_type": 2 00:19:10.281 } 00:19:10.281 ], 00:19:10.281 "driver_specific": {} 00:19:10.281 } 00:19:10.281 ] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.281 BaseBdev3 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.281 19:39:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.281 [ 00:19:10.281 { 00:19:10.281 "name": "BaseBdev3", 00:19:10.281 "aliases": [ 00:19:10.281 "d6941def-58d8-43ff-8bf7-71a343e8b552" 00:19:10.281 ], 00:19:10.281 "product_name": "Malloc disk", 00:19:10.281 "block_size": 512, 00:19:10.281 "num_blocks": 65536, 00:19:10.281 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:10.281 "assigned_rate_limits": { 00:19:10.281 "rw_ios_per_sec": 0, 00:19:10.281 "rw_mbytes_per_sec": 0, 00:19:10.281 "r_mbytes_per_sec": 0, 00:19:10.281 "w_mbytes_per_sec": 0 00:19:10.281 }, 00:19:10.281 "claimed": false, 00:19:10.281 "zoned": false, 00:19:10.281 "supported_io_types": { 00:19:10.281 "read": true, 00:19:10.281 "write": true, 00:19:10.281 "unmap": true, 00:19:10.281 "flush": true, 00:19:10.281 "reset": true, 00:19:10.281 "nvme_admin": false, 00:19:10.281 "nvme_io": false, 00:19:10.281 "nvme_io_md": 
false, 00:19:10.281 "write_zeroes": true, 00:19:10.281 "zcopy": true, 00:19:10.281 "get_zone_info": false, 00:19:10.281 "zone_management": false, 00:19:10.281 "zone_append": false, 00:19:10.281 "compare": false, 00:19:10.281 "compare_and_write": false, 00:19:10.281 "abort": true, 00:19:10.281 "seek_hole": false, 00:19:10.281 "seek_data": false, 00:19:10.281 "copy": true, 00:19:10.281 "nvme_iov_md": false 00:19:10.281 }, 00:19:10.281 "memory_domains": [ 00:19:10.281 { 00:19:10.281 "dma_device_id": "system", 00:19:10.281 "dma_device_type": 1 00:19:10.281 }, 00:19:10.281 { 00:19:10.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.281 "dma_device_type": 2 00:19:10.281 } 00:19:10.281 ], 00:19:10.281 "driver_specific": {} 00:19:10.281 } 00:19:10.281 ] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.281 [2024-12-05 19:39:03.652858] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.281 [2024-12-05 19:39:03.652911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.281 [2024-12-05 19:39:03.652950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:19:10.281 [2024-12-05 19:39:03.655407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.281 19:39:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.281 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.281 "name": "Existed_Raid", 00:19:10.281 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:10.281 "strip_size_kb": 64, 00:19:10.281 "state": "configuring", 00:19:10.281 "raid_level": "raid5f", 00:19:10.281 "superblock": true, 00:19:10.281 "num_base_bdevs": 3, 00:19:10.281 "num_base_bdevs_discovered": 2, 00:19:10.281 "num_base_bdevs_operational": 3, 00:19:10.281 "base_bdevs_list": [ 00:19:10.281 { 00:19:10.281 "name": "BaseBdev1", 00:19:10.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.281 "is_configured": false, 00:19:10.281 "data_offset": 0, 00:19:10.281 "data_size": 0 00:19:10.281 }, 00:19:10.281 { 00:19:10.281 "name": "BaseBdev2", 00:19:10.281 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:10.281 "is_configured": true, 00:19:10.281 "data_offset": 2048, 00:19:10.281 "data_size": 63488 00:19:10.281 }, 00:19:10.281 { 00:19:10.281 "name": "BaseBdev3", 00:19:10.281 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:10.281 "is_configured": true, 00:19:10.281 "data_offset": 2048, 00:19:10.281 "data_size": 63488 00:19:10.281 } 00:19:10.281 ] 00:19:10.282 }' 00:19:10.282 19:39:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.282 19:39:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.849 [2024-12-05 19:39:04.189164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.849 
19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.849 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:10.850 "name": "Existed_Raid", 00:19:10.850 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:10.850 "strip_size_kb": 64, 00:19:10.850 "state": "configuring", 00:19:10.850 "raid_level": "raid5f", 00:19:10.850 "superblock": true, 00:19:10.850 "num_base_bdevs": 3, 00:19:10.850 "num_base_bdevs_discovered": 1, 00:19:10.850 "num_base_bdevs_operational": 3, 00:19:10.850 "base_bdevs_list": [ 00:19:10.850 { 00:19:10.850 "name": "BaseBdev1", 00:19:10.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.850 "is_configured": false, 00:19:10.850 "data_offset": 0, 00:19:10.850 "data_size": 0 00:19:10.850 }, 00:19:10.850 { 00:19:10.850 "name": null, 00:19:10.850 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:10.850 "is_configured": false, 00:19:10.850 "data_offset": 0, 00:19:10.850 "data_size": 63488 00:19:10.850 }, 00:19:10.850 { 00:19:10.850 "name": "BaseBdev3", 00:19:10.850 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:10.850 "is_configured": true, 00:19:10.850 "data_offset": 2048, 00:19:10.850 "data_size": 63488 00:19:10.850 } 00:19:10.850 ] 00:19:10.850 }' 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.850 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.418 [2024-12-05 19:39:04.829029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.418 BaseBdev1 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:11.418 
19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.418 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.419 [ 00:19:11.419 { 00:19:11.419 "name": "BaseBdev1", 00:19:11.419 "aliases": [ 00:19:11.419 "cf70e737-2aa2-439f-a57c-36a04babd302" 00:19:11.419 ], 00:19:11.419 "product_name": "Malloc disk", 00:19:11.419 "block_size": 512, 00:19:11.419 "num_blocks": 65536, 00:19:11.419 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:11.419 "assigned_rate_limits": { 00:19:11.419 "rw_ios_per_sec": 0, 00:19:11.419 "rw_mbytes_per_sec": 0, 00:19:11.419 "r_mbytes_per_sec": 0, 00:19:11.419 "w_mbytes_per_sec": 0 00:19:11.419 }, 00:19:11.419 "claimed": true, 00:19:11.419 "claim_type": "exclusive_write", 00:19:11.419 "zoned": false, 00:19:11.419 "supported_io_types": { 00:19:11.419 "read": true, 00:19:11.419 "write": true, 00:19:11.419 "unmap": true, 00:19:11.419 "flush": true, 00:19:11.419 "reset": true, 00:19:11.419 "nvme_admin": false, 00:19:11.419 "nvme_io": false, 00:19:11.419 "nvme_io_md": false, 00:19:11.419 "write_zeroes": true, 00:19:11.419 "zcopy": true, 00:19:11.419 "get_zone_info": false, 00:19:11.419 "zone_management": false, 00:19:11.419 "zone_append": false, 00:19:11.419 "compare": false, 00:19:11.419 "compare_and_write": false, 00:19:11.419 "abort": true, 00:19:11.419 "seek_hole": false, 00:19:11.419 "seek_data": false, 00:19:11.419 "copy": true, 00:19:11.419 "nvme_iov_md": false 00:19:11.419 }, 00:19:11.419 "memory_domains": [ 00:19:11.419 { 00:19:11.419 "dma_device_id": "system", 00:19:11.419 "dma_device_type": 1 00:19:11.419 }, 00:19:11.419 { 00:19:11.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.419 "dma_device_type": 2 00:19:11.419 } 00:19:11.419 ], 00:19:11.678 "driver_specific": {} 00:19:11.678 } 00:19:11.678 ] 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.678 
19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.678 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:11.678 "name": "Existed_Raid", 00:19:11.678 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:11.678 "strip_size_kb": 64, 00:19:11.678 "state": "configuring", 00:19:11.678 "raid_level": "raid5f", 00:19:11.678 "superblock": true, 00:19:11.678 "num_base_bdevs": 3, 00:19:11.678 "num_base_bdevs_discovered": 2, 00:19:11.678 "num_base_bdevs_operational": 3, 00:19:11.678 "base_bdevs_list": [ 00:19:11.678 { 00:19:11.678 "name": "BaseBdev1", 00:19:11.678 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:11.678 "is_configured": true, 00:19:11.678 "data_offset": 2048, 00:19:11.678 "data_size": 63488 00:19:11.678 }, 00:19:11.678 { 00:19:11.678 "name": null, 00:19:11.678 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:11.678 "is_configured": false, 00:19:11.678 "data_offset": 0, 00:19:11.678 "data_size": 63488 00:19:11.678 }, 00:19:11.678 { 00:19:11.679 "name": "BaseBdev3", 00:19:11.679 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:11.679 "is_configured": true, 00:19:11.679 "data_offset": 2048, 00:19:11.679 "data_size": 63488 00:19:11.679 } 00:19:11.679 ] 00:19:11.679 }' 00:19:11.679 19:39:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.679 19:39:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.244 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.244 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.244 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.245 [2024-12-05 19:39:05.465326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.245 19:39:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.245 "name": "Existed_Raid", 00:19:12.245 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:12.245 "strip_size_kb": 64, 00:19:12.245 "state": "configuring", 00:19:12.245 "raid_level": "raid5f", 00:19:12.245 "superblock": true, 00:19:12.245 "num_base_bdevs": 3, 00:19:12.245 "num_base_bdevs_discovered": 1, 00:19:12.245 "num_base_bdevs_operational": 3, 00:19:12.245 "base_bdevs_list": [ 00:19:12.245 { 00:19:12.245 "name": "BaseBdev1", 00:19:12.245 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:12.245 "is_configured": true, 00:19:12.245 "data_offset": 2048, 00:19:12.245 "data_size": 63488 00:19:12.245 }, 00:19:12.245 { 00:19:12.245 "name": null, 00:19:12.245 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:12.245 "is_configured": false, 00:19:12.245 "data_offset": 0, 00:19:12.245 "data_size": 63488 00:19:12.245 }, 00:19:12.245 { 00:19:12.245 "name": null, 00:19:12.245 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:12.245 "is_configured": false, 00:19:12.245 "data_offset": 0, 00:19:12.245 "data_size": 63488 00:19:12.245 } 00:19:12.245 ] 00:19:12.245 }' 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.245 19:39:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.829 [2024-12-05 19:39:06.089536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.829 19:39:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.829 "name": "Existed_Raid", 00:19:12.829 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:12.829 "strip_size_kb": 64, 00:19:12.829 "state": "configuring", 00:19:12.829 "raid_level": "raid5f", 00:19:12.829 "superblock": true, 00:19:12.829 "num_base_bdevs": 3, 00:19:12.829 "num_base_bdevs_discovered": 2, 00:19:12.829 "num_base_bdevs_operational": 3, 00:19:12.829 "base_bdevs_list": [ 00:19:12.829 { 00:19:12.829 "name": "BaseBdev1", 00:19:12.829 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:12.829 "is_configured": true, 00:19:12.829 "data_offset": 2048, 00:19:12.829 "data_size": 63488 00:19:12.829 }, 00:19:12.829 { 00:19:12.829 "name": null, 00:19:12.829 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:12.829 "is_configured": false, 00:19:12.829 "data_offset": 0, 00:19:12.829 "data_size": 63488 00:19:12.829 }, 00:19:12.829 { 
00:19:12.829 "name": "BaseBdev3", 00:19:12.829 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:12.829 "is_configured": true, 00:19:12.829 "data_offset": 2048, 00:19:12.829 "data_size": 63488 00:19:12.829 } 00:19:12.829 ] 00:19:12.829 }' 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.829 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.397 [2024-12-05 19:39:06.673796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.397 "name": "Existed_Raid", 00:19:13.397 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:13.397 "strip_size_kb": 64, 00:19:13.397 "state": "configuring", 00:19:13.397 "raid_level": "raid5f", 00:19:13.397 "superblock": true, 00:19:13.397 "num_base_bdevs": 3, 00:19:13.397 "num_base_bdevs_discovered": 1, 00:19:13.397 
"num_base_bdevs_operational": 3, 00:19:13.397 "base_bdevs_list": [ 00:19:13.397 { 00:19:13.397 "name": null, 00:19:13.397 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:13.397 "is_configured": false, 00:19:13.397 "data_offset": 0, 00:19:13.397 "data_size": 63488 00:19:13.397 }, 00:19:13.397 { 00:19:13.397 "name": null, 00:19:13.397 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:13.397 "is_configured": false, 00:19:13.397 "data_offset": 0, 00:19:13.397 "data_size": 63488 00:19:13.397 }, 00:19:13.397 { 00:19:13.397 "name": "BaseBdev3", 00:19:13.397 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:13.397 "is_configured": true, 00:19:13.397 "data_offset": 2048, 00:19:13.397 "data_size": 63488 00:19:13.397 } 00:19:13.397 ] 00:19:13.397 }' 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.397 19:39:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.964 19:39:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.964 [2024-12-05 19:39:07.335551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.964 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.965 "name": "Existed_Raid", 00:19:13.965 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:13.965 "strip_size_kb": 64, 00:19:13.965 "state": "configuring", 00:19:13.965 "raid_level": "raid5f", 00:19:13.965 "superblock": true, 00:19:13.965 "num_base_bdevs": 3, 00:19:13.965 "num_base_bdevs_discovered": 2, 00:19:13.965 "num_base_bdevs_operational": 3, 00:19:13.965 "base_bdevs_list": [ 00:19:13.965 { 00:19:13.965 "name": null, 00:19:13.965 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:13.965 "is_configured": false, 00:19:13.965 "data_offset": 0, 00:19:13.965 "data_size": 63488 00:19:13.965 }, 00:19:13.965 { 00:19:13.965 "name": "BaseBdev2", 00:19:13.965 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:13.965 "is_configured": true, 00:19:13.965 "data_offset": 2048, 00:19:13.965 "data_size": 63488 00:19:13.965 }, 00:19:13.965 { 00:19:13.965 "name": "BaseBdev3", 00:19:13.965 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:13.965 "is_configured": true, 00:19:13.965 "data_offset": 2048, 00:19:13.965 "data_size": 63488 00:19:13.965 } 00:19:13.965 ] 00:19:13.965 }' 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.965 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.531 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.531 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.531 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.531 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:19:14.531 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.531 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:14.532 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:14.532 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.532 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.532 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.532 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 19:39:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf70e737-2aa2-439f-a57c-36a04babd302 00:19:14.790 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 19:39:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 [2024-12-05 19:39:08.026019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:14.790 NewBaseBdev 00:19:14.790 [2024-12-05 19:39:08.026685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:14.790 [2024-12-05 19:39:08.026748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:14.790 [2024-12-05 19:39:08.027057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 [2024-12-05 19:39:08.032888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:14.790 [2024-12-05 19:39:08.033067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:14.790 [2024-12-05 19:39:08.033616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 [ 00:19:14.790 { 00:19:14.790 "name": "NewBaseBdev", 00:19:14.790 "aliases": [ 00:19:14.790 "cf70e737-2aa2-439f-a57c-36a04babd302" 00:19:14.790 
], 00:19:14.790 "product_name": "Malloc disk", 00:19:14.790 "block_size": 512, 00:19:14.790 "num_blocks": 65536, 00:19:14.790 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:14.790 "assigned_rate_limits": { 00:19:14.790 "rw_ios_per_sec": 0, 00:19:14.790 "rw_mbytes_per_sec": 0, 00:19:14.791 "r_mbytes_per_sec": 0, 00:19:14.791 "w_mbytes_per_sec": 0 00:19:14.791 }, 00:19:14.791 "claimed": true, 00:19:14.791 "claim_type": "exclusive_write", 00:19:14.791 "zoned": false, 00:19:14.791 "supported_io_types": { 00:19:14.791 "read": true, 00:19:14.791 "write": true, 00:19:14.791 "unmap": true, 00:19:14.791 "flush": true, 00:19:14.791 "reset": true, 00:19:14.791 "nvme_admin": false, 00:19:14.791 "nvme_io": false, 00:19:14.791 "nvme_io_md": false, 00:19:14.791 "write_zeroes": true, 00:19:14.791 "zcopy": true, 00:19:14.791 "get_zone_info": false, 00:19:14.791 "zone_management": false, 00:19:14.791 "zone_append": false, 00:19:14.791 "compare": false, 00:19:14.791 "compare_and_write": false, 00:19:14.791 "abort": true, 00:19:14.791 "seek_hole": false, 00:19:14.791 "seek_data": false, 00:19:14.791 "copy": true, 00:19:14.791 "nvme_iov_md": false 00:19:14.791 }, 00:19:14.791 "memory_domains": [ 00:19:14.791 { 00:19:14.791 "dma_device_id": "system", 00:19:14.791 "dma_device_type": 1 00:19:14.791 }, 00:19:14.791 { 00:19:14.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.791 "dma_device_type": 2 00:19:14.791 } 00:19:14.791 ], 00:19:14.791 "driver_specific": {} 00:19:14.791 } 00:19:14.791 ] 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.791 "name": "Existed_Raid", 00:19:14.791 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:14.791 "strip_size_kb": 64, 00:19:14.791 "state": "online", 00:19:14.791 "raid_level": "raid5f", 00:19:14.791 "superblock": true, 00:19:14.791 "num_base_bdevs": 3, 00:19:14.791 "num_base_bdevs_discovered": 3, 00:19:14.791 
"num_base_bdevs_operational": 3, 00:19:14.791 "base_bdevs_list": [ 00:19:14.791 { 00:19:14.791 "name": "NewBaseBdev", 00:19:14.791 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 }, 00:19:14.791 { 00:19:14.791 "name": "BaseBdev2", 00:19:14.791 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 }, 00:19:14.791 { 00:19:14.791 "name": "BaseBdev3", 00:19:14.791 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:14.791 "is_configured": true, 00:19:14.791 "data_offset": 2048, 00:19:14.791 "data_size": 63488 00:19:14.791 } 00:19:14.791 ] 00:19:14.791 }' 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.791 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.358 19:39:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.358 [2024-12-05 19:39:08.612580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.358 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.358 "name": "Existed_Raid", 00:19:15.358 "aliases": [ 00:19:15.358 "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9" 00:19:15.358 ], 00:19:15.358 "product_name": "Raid Volume", 00:19:15.358 "block_size": 512, 00:19:15.358 "num_blocks": 126976, 00:19:15.358 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:15.358 "assigned_rate_limits": { 00:19:15.358 "rw_ios_per_sec": 0, 00:19:15.358 "rw_mbytes_per_sec": 0, 00:19:15.358 "r_mbytes_per_sec": 0, 00:19:15.358 "w_mbytes_per_sec": 0 00:19:15.358 }, 00:19:15.358 "claimed": false, 00:19:15.358 "zoned": false, 00:19:15.358 "supported_io_types": { 00:19:15.358 "read": true, 00:19:15.358 "write": true, 00:19:15.358 "unmap": false, 00:19:15.358 "flush": false, 00:19:15.358 "reset": true, 00:19:15.358 "nvme_admin": false, 00:19:15.358 "nvme_io": false, 00:19:15.358 "nvme_io_md": false, 00:19:15.358 "write_zeroes": true, 00:19:15.358 "zcopy": false, 00:19:15.358 "get_zone_info": false, 00:19:15.358 "zone_management": false, 00:19:15.358 "zone_append": false, 00:19:15.358 "compare": false, 00:19:15.358 "compare_and_write": false, 00:19:15.359 "abort": false, 00:19:15.359 "seek_hole": false, 00:19:15.359 "seek_data": false, 00:19:15.359 "copy": false, 00:19:15.359 "nvme_iov_md": false 00:19:15.359 }, 00:19:15.359 "driver_specific": { 00:19:15.359 "raid": { 00:19:15.359 "uuid": "c67fe4c0-42b8-4ce8-ae14-e74fe68b84f9", 00:19:15.359 "strip_size_kb": 64, 00:19:15.359 "state": "online", 00:19:15.359 "raid_level": 
"raid5f", 00:19:15.359 "superblock": true, 00:19:15.359 "num_base_bdevs": 3, 00:19:15.359 "num_base_bdevs_discovered": 3, 00:19:15.359 "num_base_bdevs_operational": 3, 00:19:15.359 "base_bdevs_list": [ 00:19:15.359 { 00:19:15.359 "name": "NewBaseBdev", 00:19:15.359 "uuid": "cf70e737-2aa2-439f-a57c-36a04babd302", 00:19:15.359 "is_configured": true, 00:19:15.359 "data_offset": 2048, 00:19:15.359 "data_size": 63488 00:19:15.359 }, 00:19:15.359 { 00:19:15.359 "name": "BaseBdev2", 00:19:15.359 "uuid": "02f30591-20d5-49cc-868b-8e5c36936967", 00:19:15.359 "is_configured": true, 00:19:15.359 "data_offset": 2048, 00:19:15.359 "data_size": 63488 00:19:15.359 }, 00:19:15.359 { 00:19:15.359 "name": "BaseBdev3", 00:19:15.359 "uuid": "d6941def-58d8-43ff-8bf7-71a343e8b552", 00:19:15.359 "is_configured": true, 00:19:15.359 "data_offset": 2048, 00:19:15.359 "data_size": 63488 00:19:15.359 } 00:19:15.359 ] 00:19:15.359 } 00:19:15.359 } 00:19:15.359 }' 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:15.359 BaseBdev2 00:19:15.359 BaseBdev3' 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 
00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.359 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.618 19:39:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.618 [2024-12-05 19:39:08.956411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.618 [2024-12-05 19:39:08.956448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.618 [2024-12-05 19:39:08.956561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.618 [2024-12-05 19:39:08.957007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.618 [2024-12-05 19:39:08.957039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80870 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80870 ']' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80870 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80870 00:19:15.618 killing process with pid 80870 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80870' 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80870 00:19:15.618 [2024-12-05 19:39:08.997457] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.618 19:39:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80870 00:19:15.877 [2024-12-05 19:39:09.258802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.252 19:39:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:17.252 00:19:17.252 real 0m12.063s 00:19:17.252 user 0m19.989s 00:19:17.252 sys 0m1.748s 00:19:17.252 19:39:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.252 19:39:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 ************************************ 00:19:17.252 END TEST raid5f_state_function_test_sb 00:19:17.252 ************************************ 00:19:17.252 19:39:10 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:19:17.252 19:39:10 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:17.252 19:39:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.252 19:39:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 ************************************ 00:19:17.252 START TEST raid5f_superblock_test 00:19:17.252 ************************************ 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81503 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81503 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81503 ']' 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.252 19:39:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.252 [2024-12-05 19:39:10.502685] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:19:17.252 [2024-12-05 19:39:10.502900] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81503 ] 00:19:17.252 [2024-12-05 19:39:10.690084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.511 [2024-12-05 19:39:10.821585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.769 [2024-12-05 19:39:11.030331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.769 [2024-12-05 19:39:11.030378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.029 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 malloc1 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 [2024-12-05 19:39:11.499377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.366 [2024-12-05 19:39:11.499493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.366 [2024-12-05 19:39:11.499525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:18.366 [2024-12-05 19:39:11.499540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.366 [2024-12-05 19:39:11.502476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.366 [2024-12-05 19:39:11.502551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.366 pt1 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 malloc2 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 [2024-12-05 19:39:11.554594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:18.366 [2024-12-05 19:39:11.554696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.366 [2024-12-05 19:39:11.554770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:18.366 [2024-12-05 19:39:11.554787] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.366 [2024-12-05 19:39:11.557660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.366 [2024-12-05 19:39:11.557773] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:18.366 pt2 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 malloc3 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 [2024-12-05 19:39:11.620898] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:18.366 [2024-12-05 19:39:11.621001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.366 [2024-12-05 19:39:11.621034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:18.366 [2024-12-05 19:39:11.621051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.366 [2024-12-05 19:39:11.624248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.366 [2024-12-05 19:39:11.624291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:18.366 pt3 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.366 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.366 [2024-12-05 19:39:11.629015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:18.367 [2024-12-05 19:39:11.631521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:18.367 [2024-12-05 19:39:11.631647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:18.367 [2024-12-05 19:39:11.631934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:18.367 [2024-12-05 19:39:11.631975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
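The `blockcnt 126976, blocklen 512` reported at configure time follows directly from the geometry in this log: each 65536-block malloc bdev loses 2048 blocks to the superblock (`data_offset`), leaving 63488 data blocks (`data_size`), and raid5f keeps N-1 bdevs' worth of data with one bdev's worth of parity. A back-of-envelope check (hedged: any strip-boundary rounding SPDK applies is not visible here, but these numbers line up without it):

```python
# Geometry as reported in the log lines above.
num_base_bdevs = 3
base_num_blocks = 65536   # malloc bdev size, in 512-byte blocks
superblock_offset = 2048  # data_offset when -s (superblock) is passed

data_blocks_per_base = base_num_blocks - superblock_offset   # 63488
raid5f_blocks = (num_base_bdevs - 1) * data_blocks_per_base  # parity costs one bdev

print(data_blocks_per_base, raid5f_blocks)  # 63488 126976
```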
00:19:18.367 [2024-12-05 19:39:11.632298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.367 [2024-12-05 19:39:11.637661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:18.367 [2024-12-05 19:39:11.637691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:18.367 [2024-12-05 19:39:11.638007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.367 "name": "raid_bdev1", 00:19:18.367 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:18.367 "strip_size_kb": 64, 00:19:18.367 "state": "online", 00:19:18.367 "raid_level": "raid5f", 00:19:18.367 "superblock": true, 00:19:18.367 "num_base_bdevs": 3, 00:19:18.367 "num_base_bdevs_discovered": 3, 00:19:18.367 "num_base_bdevs_operational": 3, 00:19:18.367 "base_bdevs_list": [ 00:19:18.367 { 00:19:18.367 "name": "pt1", 00:19:18.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.367 "is_configured": true, 00:19:18.367 "data_offset": 2048, 00:19:18.367 "data_size": 63488 00:19:18.367 }, 00:19:18.367 { 00:19:18.367 "name": "pt2", 00:19:18.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.367 "is_configured": true, 00:19:18.367 "data_offset": 2048, 00:19:18.367 "data_size": 63488 00:19:18.367 }, 00:19:18.367 { 00:19:18.367 "name": "pt3", 00:19:18.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:18.367 "is_configured": true, 00:19:18.367 "data_offset": 2048, 00:19:18.367 "data_size": 63488 00:19:18.367 } 00:19:18.367 ] 00:19:18.367 }' 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.367 19:39:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:18.935 19:39:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:18.935 [2024-12-05 19:39:12.188497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.935 "name": "raid_bdev1", 00:19:18.935 "aliases": [ 00:19:18.935 "d991dead-17e6-44d4-99b1-79f0dd473b04" 00:19:18.935 ], 00:19:18.935 "product_name": "Raid Volume", 00:19:18.935 "block_size": 512, 00:19:18.935 "num_blocks": 126976, 00:19:18.935 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:18.935 "assigned_rate_limits": { 00:19:18.935 "rw_ios_per_sec": 0, 00:19:18.935 "rw_mbytes_per_sec": 0, 00:19:18.935 "r_mbytes_per_sec": 0, 00:19:18.935 "w_mbytes_per_sec": 0 00:19:18.935 }, 00:19:18.935 "claimed": false, 00:19:18.935 "zoned": false, 00:19:18.935 "supported_io_types": { 00:19:18.935 "read": true, 00:19:18.935 "write": true, 00:19:18.935 "unmap": false, 00:19:18.935 "flush": false, 00:19:18.935 "reset": true, 00:19:18.935 "nvme_admin": false, 00:19:18.935 "nvme_io": false, 00:19:18.935 "nvme_io_md": false, 
00:19:18.935 "write_zeroes": true, 00:19:18.935 "zcopy": false, 00:19:18.935 "get_zone_info": false, 00:19:18.935 "zone_management": false, 00:19:18.935 "zone_append": false, 00:19:18.935 "compare": false, 00:19:18.935 "compare_and_write": false, 00:19:18.935 "abort": false, 00:19:18.935 "seek_hole": false, 00:19:18.935 "seek_data": false, 00:19:18.935 "copy": false, 00:19:18.935 "nvme_iov_md": false 00:19:18.935 }, 00:19:18.935 "driver_specific": { 00:19:18.935 "raid": { 00:19:18.935 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:18.935 "strip_size_kb": 64, 00:19:18.935 "state": "online", 00:19:18.935 "raid_level": "raid5f", 00:19:18.935 "superblock": true, 00:19:18.935 "num_base_bdevs": 3, 00:19:18.935 "num_base_bdevs_discovered": 3, 00:19:18.935 "num_base_bdevs_operational": 3, 00:19:18.935 "base_bdevs_list": [ 00:19:18.935 { 00:19:18.935 "name": "pt1", 00:19:18.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.935 "is_configured": true, 00:19:18.935 "data_offset": 2048, 00:19:18.935 "data_size": 63488 00:19:18.935 }, 00:19:18.935 { 00:19:18.935 "name": "pt2", 00:19:18.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.935 "is_configured": true, 00:19:18.935 "data_offset": 2048, 00:19:18.935 "data_size": 63488 00:19:18.935 }, 00:19:18.935 { 00:19:18.935 "name": "pt3", 00:19:18.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:18.935 "is_configured": true, 00:19:18.935 "data_offset": 2048, 00:19:18.935 "data_size": 63488 00:19:18.935 } 00:19:18.935 ] 00:19:18.935 } 00:19:18.935 } 00:19:18.935 }' 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:18.935 pt2 00:19:18.935 pt3' 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.935 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.194 
19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:19.194 [2024-12-05 19:39:12.512441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d991dead-17e6-44d4-99b1-79f0dd473b04 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d991dead-17e6-44d4-99b1-79f0dd473b04 ']' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.194 19:39:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.194 [2024-12-05 19:39:12.564289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.194 [2024-12-05 19:39:12.564324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.194 [2024-12-05 19:39:12.564408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.194 [2024-12-05 19:39:12.564553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.194 [2024-12-05 19:39:12.564604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.194 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.453 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.453 [2024-12-05 19:39:12.708370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:19.453 [2024-12-05 19:39:12.711134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:19.453 [2024-12-05 19:39:12.711212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:19.453 [2024-12-05 19:39:12.711285] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:19.453 [2024-12-05 19:39:12.711369] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:19.453 [2024-12-05 19:39:12.711404] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:19.453 [2024-12-05 19:39:12.711431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.454 [2024-12-05 19:39:12.711445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:19.454 request: 00:19:19.454 { 00:19:19.454 "name": "raid_bdev1", 00:19:19.454 "raid_level": "raid5f", 00:19:19.454 "base_bdevs": [ 00:19:19.454 "malloc1", 00:19:19.454 "malloc2", 00:19:19.454 "malloc3" 00:19:19.454 ], 00:19:19.454 "strip_size_kb": 64, 00:19:19.454 "superblock": false, 00:19:19.454 "method": "bdev_raid_create", 00:19:19.454 "req_id": 1 00:19:19.454 } 00:19:19.454 Got JSON-RPC error response 00:19:19.454 response: 00:19:19.454 { 00:19:19.454 "code": -17, 00:19:19.454 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:19.454 } 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.454 [2024-12-05 19:39:12.768348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.454 [2024-12-05 19:39:12.768420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.454 [2024-12-05 19:39:12.768449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:19.454 [2024-12-05 19:39:12.768464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.454 [2024-12-05 19:39:12.771591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.454 [2024-12-05 19:39:12.771634] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.454 [2024-12-05 19:39:12.771781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:19.454 [2024-12-05 19:39:12.771860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.454 pt1 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.454 "name": "raid_bdev1", 00:19:19.454 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:19.454 "strip_size_kb": 64, 00:19:19.454 "state": "configuring", 00:19:19.454 "raid_level": "raid5f", 00:19:19.454 "superblock": true, 00:19:19.454 "num_base_bdevs": 3, 00:19:19.454 "num_base_bdevs_discovered": 1, 00:19:19.454 
"num_base_bdevs_operational": 3, 00:19:19.454 "base_bdevs_list": [ 00:19:19.454 { 00:19:19.454 "name": "pt1", 00:19:19.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.454 "is_configured": true, 00:19:19.454 "data_offset": 2048, 00:19:19.454 "data_size": 63488 00:19:19.454 }, 00:19:19.454 { 00:19:19.454 "name": null, 00:19:19.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.454 "is_configured": false, 00:19:19.454 "data_offset": 2048, 00:19:19.454 "data_size": 63488 00:19:19.454 }, 00:19:19.454 { 00:19:19.454 "name": null, 00:19:19.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:19.454 "is_configured": false, 00:19:19.454 "data_offset": 2048, 00:19:19.454 "data_size": 63488 00:19:19.454 } 00:19:19.454 ] 00:19:19.454 }' 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.454 19:39:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.022 [2024-12-05 19:39:13.308539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.022 [2024-12-05 19:39:13.308613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.022 [2024-12-05 19:39:13.308645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:20.022 [2024-12-05 19:39:13.308658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.022 [2024-12-05 19:39:13.309316] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.022 [2024-12-05 19:39:13.309357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.022 [2024-12-05 19:39:13.309460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.022 [2024-12-05 19:39:13.309497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.022 pt2 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.022 [2024-12-05 19:39:13.316495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.022 "name": "raid_bdev1", 00:19:20.022 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:20.022 "strip_size_kb": 64, 00:19:20.022 "state": "configuring", 00:19:20.022 "raid_level": "raid5f", 00:19:20.022 "superblock": true, 00:19:20.022 "num_base_bdevs": 3, 00:19:20.022 "num_base_bdevs_discovered": 1, 00:19:20.022 "num_base_bdevs_operational": 3, 00:19:20.022 "base_bdevs_list": [ 00:19:20.022 { 00:19:20.022 "name": "pt1", 00:19:20.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.022 "is_configured": true, 00:19:20.022 "data_offset": 2048, 00:19:20.022 "data_size": 63488 00:19:20.022 }, 00:19:20.022 { 00:19:20.022 "name": null, 00:19:20.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.022 "is_configured": false, 00:19:20.022 "data_offset": 0, 00:19:20.022 "data_size": 63488 00:19:20.022 }, 00:19:20.022 { 00:19:20.022 "name": null, 00:19:20.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:20.022 "is_configured": false, 00:19:20.022 "data_offset": 2048, 00:19:20.022 "data_size": 63488 00:19:20.022 } 00:19:20.022 ] 00:19:20.022 }' 00:19:20.022 19:39:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.022 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.590 [2024-12-05 19:39:13.856628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.590 [2024-12-05 19:39:13.856789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.590 [2024-12-05 19:39:13.856817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:20.590 [2024-12-05 19:39:13.856834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.590 [2024-12-05 19:39:13.857389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.590 [2024-12-05 19:39:13.857427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.590 [2024-12-05 19:39:13.857543] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.590 [2024-12-05 19:39:13.857590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.590 pt2 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:20.590 19:39:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.590 [2024-12-05 19:39:13.864585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:20.590 [2024-12-05 19:39:13.864651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.590 [2024-12-05 19:39:13.864671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:20.590 [2024-12-05 19:39:13.864685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.590 [2024-12-05 19:39:13.865090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.590 [2024-12-05 19:39:13.865144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:20.590 [2024-12-05 19:39:13.865212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:20.590 [2024-12-05 19:39:13.865246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:20.590 [2024-12-05 19:39:13.865388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.590 [2024-12-05 19:39:13.865417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:20.590 [2024-12-05 19:39:13.865704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:20.590 [2024-12-05 19:39:13.870222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.590 [2024-12-05 19:39:13.870247] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:20.590 [2024-12-05 19:39:13.870459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.590 pt3 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.590 "name": "raid_bdev1", 00:19:20.590 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:20.590 "strip_size_kb": 64, 00:19:20.590 "state": "online", 00:19:20.590 "raid_level": "raid5f", 00:19:20.590 "superblock": true, 00:19:20.590 "num_base_bdevs": 3, 00:19:20.590 "num_base_bdevs_discovered": 3, 00:19:20.590 "num_base_bdevs_operational": 3, 00:19:20.590 "base_bdevs_list": [ 00:19:20.590 { 00:19:20.590 "name": "pt1", 00:19:20.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.590 "is_configured": true, 00:19:20.590 "data_offset": 2048, 00:19:20.590 "data_size": 63488 00:19:20.590 }, 00:19:20.590 { 00:19:20.590 "name": "pt2", 00:19:20.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.590 "is_configured": true, 00:19:20.590 "data_offset": 2048, 00:19:20.590 "data_size": 63488 00:19:20.590 }, 00:19:20.590 { 00:19:20.590 "name": "pt3", 00:19:20.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:20.590 "is_configured": true, 00:19:20.590 "data_offset": 2048, 00:19:20.590 "data_size": 63488 00:19:20.590 } 00:19:20.590 ] 00:19:20.590 }' 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.590 19:39:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.158 [2024-12-05 19:39:14.408174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.158 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.158 "name": "raid_bdev1", 00:19:21.158 "aliases": [ 00:19:21.158 "d991dead-17e6-44d4-99b1-79f0dd473b04" 00:19:21.158 ], 00:19:21.158 "product_name": "Raid Volume", 00:19:21.158 "block_size": 512, 00:19:21.158 "num_blocks": 126976, 00:19:21.158 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:21.158 "assigned_rate_limits": { 00:19:21.158 "rw_ios_per_sec": 0, 00:19:21.158 "rw_mbytes_per_sec": 0, 00:19:21.158 "r_mbytes_per_sec": 0, 00:19:21.158 "w_mbytes_per_sec": 0 00:19:21.158 }, 00:19:21.158 "claimed": false, 00:19:21.158 "zoned": false, 00:19:21.158 "supported_io_types": { 00:19:21.158 "read": true, 00:19:21.158 "write": true, 00:19:21.158 "unmap": false, 00:19:21.158 "flush": false, 00:19:21.158 "reset": true, 00:19:21.158 "nvme_admin": false, 00:19:21.158 "nvme_io": false, 00:19:21.158 "nvme_io_md": false, 00:19:21.158 "write_zeroes": true, 00:19:21.158 "zcopy": false, 00:19:21.158 
"get_zone_info": false, 00:19:21.158 "zone_management": false, 00:19:21.158 "zone_append": false, 00:19:21.158 "compare": false, 00:19:21.158 "compare_and_write": false, 00:19:21.158 "abort": false, 00:19:21.158 "seek_hole": false, 00:19:21.158 "seek_data": false, 00:19:21.158 "copy": false, 00:19:21.158 "nvme_iov_md": false 00:19:21.158 }, 00:19:21.158 "driver_specific": { 00:19:21.158 "raid": { 00:19:21.158 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:21.158 "strip_size_kb": 64, 00:19:21.158 "state": "online", 00:19:21.158 "raid_level": "raid5f", 00:19:21.158 "superblock": true, 00:19:21.158 "num_base_bdevs": 3, 00:19:21.158 "num_base_bdevs_discovered": 3, 00:19:21.158 "num_base_bdevs_operational": 3, 00:19:21.158 "base_bdevs_list": [ 00:19:21.158 { 00:19:21.159 "name": "pt1", 00:19:21.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.159 "is_configured": true, 00:19:21.159 "data_offset": 2048, 00:19:21.159 "data_size": 63488 00:19:21.159 }, 00:19:21.159 { 00:19:21.159 "name": "pt2", 00:19:21.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.159 "is_configured": true, 00:19:21.159 "data_offset": 2048, 00:19:21.159 "data_size": 63488 00:19:21.159 }, 00:19:21.159 { 00:19:21.159 "name": "pt3", 00:19:21.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.159 "is_configured": true, 00:19:21.159 "data_offset": 2048, 00:19:21.159 "data_size": 63488 00:19:21.159 } 00:19:21.159 ] 00:19:21.159 } 00:19:21.159 } 00:19:21.159 }' 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:21.159 pt2 00:19:21.159 pt3' 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.159 19:39:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.159 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:21.418 [2024-12-05 19:39:14.736327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.418 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d991dead-17e6-44d4-99b1-79f0dd473b04 '!=' d991dead-17e6-44d4-99b1-79f0dd473b04 ']' 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.419 [2024-12-05 19:39:14.788214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.419 "name": "raid_bdev1", 00:19:21.419 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:21.419 "strip_size_kb": 64, 00:19:21.419 "state": "online", 00:19:21.419 "raid_level": "raid5f", 00:19:21.419 "superblock": true, 00:19:21.419 "num_base_bdevs": 3, 00:19:21.419 "num_base_bdevs_discovered": 2, 00:19:21.419 "num_base_bdevs_operational": 2, 00:19:21.419 "base_bdevs_list": [ 00:19:21.419 { 00:19:21.419 "name": null, 00:19:21.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.419 "is_configured": false, 00:19:21.419 "data_offset": 0, 00:19:21.419 "data_size": 63488 00:19:21.419 }, 00:19:21.419 { 00:19:21.419 "name": "pt2", 00:19:21.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.419 "is_configured": true, 00:19:21.419 "data_offset": 2048, 00:19:21.419 "data_size": 63488 00:19:21.419 }, 00:19:21.419 { 00:19:21.419 "name": "pt3", 00:19:21.419 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.419 "is_configured": true, 00:19:21.419 "data_offset": 2048, 00:19:21.419 "data_size": 63488 00:19:21.419 } 00:19:21.419 ] 00:19:21.419 }' 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.419 19:39:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.986 [2024-12-05 19:39:15.324450] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.986 [2024-12-05 19:39:15.324722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.986 [2024-12-05 19:39:15.324853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.986 [2024-12-05 19:39:15.324934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.986 [2024-12-05 19:39:15.324957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.986 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.986 [2024-12-05 19:39:15.424405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.986 [2024-12-05 19:39:15.424675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.986 [2024-12-05 19:39:15.424735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:21.986 [2024-12-05 19:39:15.424757] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:19:22.243 [2024-12-05 19:39:15.427925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.243 [2024-12-05 19:39:15.428083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:22.243 [2024-12-05 19:39:15.428304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:22.243 [2024-12-05 19:39:15.428469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.243 pt2 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.243 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:22.244 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.244 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.244 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.244 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.244 "name": "raid_bdev1", 00:19:22.244 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:22.244 "strip_size_kb": 64, 00:19:22.244 "state": "configuring", 00:19:22.244 "raid_level": "raid5f", 00:19:22.244 "superblock": true, 00:19:22.244 "num_base_bdevs": 3, 00:19:22.244 "num_base_bdevs_discovered": 1, 00:19:22.244 "num_base_bdevs_operational": 2, 00:19:22.244 "base_bdevs_list": [ 00:19:22.244 { 00:19:22.244 "name": null, 00:19:22.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.244 "is_configured": false, 00:19:22.244 "data_offset": 2048, 00:19:22.244 "data_size": 63488 00:19:22.244 }, 00:19:22.244 { 00:19:22.244 "name": "pt2", 00:19:22.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.244 "is_configured": true, 00:19:22.244 "data_offset": 2048, 00:19:22.244 "data_size": 63488 00:19:22.244 }, 00:19:22.244 { 00:19:22.244 "name": null, 00:19:22.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.244 "is_configured": false, 00:19:22.244 "data_offset": 2048, 00:19:22.244 "data_size": 63488 00:19:22.244 } 00:19:22.244 ] 00:19:22.244 }' 00:19:22.244 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.244 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 [2024-12-05 19:39:15.972970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:22.807 [2024-12-05 19:39:15.973290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.807 [2024-12-05 19:39:15.973362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:22.807 [2024-12-05 19:39:15.973554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.807 [2024-12-05 19:39:15.974284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.807 [2024-12-05 19:39:15.974339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:22.807 [2024-12-05 19:39:15.974437] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:22.807 [2024-12-05 19:39:15.974475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:22.807 [2024-12-05 19:39:15.974616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:22.807 [2024-12-05 19:39:15.974636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:22.807 [2024-12-05 19:39:15.975002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:22.807 [2024-12-05 19:39:15.980045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:22.807 pt3 00:19:22.807 [2024-12-05 19:39:15.980253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:22.807 [2024-12-05 19:39:15.980598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 19:39:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.807 19:39:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.807 "name": "raid_bdev1", 00:19:22.807 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:22.807 "strip_size_kb": 64, 00:19:22.807 "state": "online", 00:19:22.807 "raid_level": "raid5f", 00:19:22.807 "superblock": true, 00:19:22.807 "num_base_bdevs": 3, 00:19:22.807 "num_base_bdevs_discovered": 2, 00:19:22.807 "num_base_bdevs_operational": 2, 00:19:22.807 "base_bdevs_list": [ 00:19:22.807 { 00:19:22.807 "name": null, 00:19:22.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.807 "is_configured": false, 00:19:22.807 "data_offset": 2048, 00:19:22.807 "data_size": 63488 00:19:22.807 }, 00:19:22.807 { 00:19:22.807 "name": "pt2", 00:19:22.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.807 "is_configured": true, 00:19:22.807 "data_offset": 2048, 00:19:22.807 "data_size": 63488 00:19:22.807 }, 00:19:22.807 { 00:19:22.807 "name": "pt3", 00:19:22.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.807 "is_configured": true, 00:19:22.807 "data_offset": 2048, 00:19:22.807 "data_size": 63488 00:19:22.807 } 00:19:22.807 ] 00:19:22.807 }' 00:19:22.807 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.807 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.373 [2024-12-05 19:39:16.518691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.373 [2024-12-05 19:39:16.518772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.373 [2024-12-05 19:39:16.518867] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.373 [2024-12-05 19:39:16.518947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.373 [2024-12-05 19:39:16.518962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.373 [2024-12-05 19:39:16.594783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:23.373 [2024-12-05 19:39:16.594893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.373 [2024-12-05 19:39:16.594922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:23.373 [2024-12-05 19:39:16.594936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.373 [2024-12-05 19:39:16.597973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.373 pt1 00:19:23.373 [2024-12-05 19:39:16.598269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:23.373 [2024-12-05 19:39:16.598418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:23.373 [2024-12-05 19:39:16.598503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:23.373 [2024-12-05 19:39:16.598707] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:23.373 [2024-12-05 19:39:16.598725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.373 [2024-12-05 19:39:16.598747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:23.373 [2024-12-05 19:39:16.598838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:19:23.373 19:39:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.373 "name": "raid_bdev1", 00:19:23.373 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:23.373 "strip_size_kb": 64, 00:19:23.373 "state": "configuring", 00:19:23.373 "raid_level": "raid5f", 00:19:23.373 
"superblock": true, 00:19:23.373 "num_base_bdevs": 3, 00:19:23.373 "num_base_bdevs_discovered": 1, 00:19:23.373 "num_base_bdevs_operational": 2, 00:19:23.373 "base_bdevs_list": [ 00:19:23.373 { 00:19:23.373 "name": null, 00:19:23.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.373 "is_configured": false, 00:19:23.373 "data_offset": 2048, 00:19:23.373 "data_size": 63488 00:19:23.373 }, 00:19:23.373 { 00:19:23.373 "name": "pt2", 00:19:23.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.373 "is_configured": true, 00:19:23.373 "data_offset": 2048, 00:19:23.373 "data_size": 63488 00:19:23.373 }, 00:19:23.373 { 00:19:23.373 "name": null, 00:19:23.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.373 "is_configured": false, 00:19:23.373 "data_offset": 2048, 00:19:23.373 "data_size": 63488 00:19:23.373 } 00:19:23.373 ] 00:19:23.373 }' 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.373 19:39:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.965 [2024-12-05 19:39:17.195159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:23.965 [2024-12-05 19:39:17.195604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.965 [2024-12-05 19:39:17.195647] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:23.965 [2024-12-05 19:39:17.195663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.965 [2024-12-05 19:39:17.196420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.965 [2024-12-05 19:39:17.196462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:23.965 [2024-12-05 19:39:17.196569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:23.965 [2024-12-05 19:39:17.196601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:23.965 [2024-12-05 19:39:17.196792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:23.965 [2024-12-05 19:39:17.196807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:23.965 [2024-12-05 19:39:17.197129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:23.965 [2024-12-05 19:39:17.201879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:23.965 [2024-12-05 19:39:17.202045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:23.965 [2024-12-05 19:39:17.202491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.965 pt3 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.965 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.965 "name": "raid_bdev1", 00:19:23.965 "uuid": "d991dead-17e6-44d4-99b1-79f0dd473b04", 00:19:23.965 "strip_size_kb": 64, 00:19:23.965 "state": "online", 00:19:23.965 "raid_level": 
"raid5f", 00:19:23.965 "superblock": true, 00:19:23.965 "num_base_bdevs": 3, 00:19:23.965 "num_base_bdevs_discovered": 2, 00:19:23.965 "num_base_bdevs_operational": 2, 00:19:23.965 "base_bdevs_list": [ 00:19:23.965 { 00:19:23.965 "name": null, 00:19:23.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.965 "is_configured": false, 00:19:23.965 "data_offset": 2048, 00:19:23.965 "data_size": 63488 00:19:23.965 }, 00:19:23.965 { 00:19:23.965 "name": "pt2", 00:19:23.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.965 "is_configured": true, 00:19:23.965 "data_offset": 2048, 00:19:23.965 "data_size": 63488 00:19:23.965 }, 00:19:23.965 { 00:19:23.965 "name": "pt3", 00:19:23.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.966 "is_configured": true, 00:19:23.966 "data_offset": 2048, 00:19:23.966 "data_size": 63488 00:19:23.966 } 00:19:23.966 ] 00:19:23.966 }' 00:19:23.966 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.966 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.547 [2024-12-05 19:39:17.800471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d991dead-17e6-44d4-99b1-79f0dd473b04 '!=' d991dead-17e6-44d4-99b1-79f0dd473b04 ']' 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81503 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81503 ']' 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81503 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81503 00:19:24.547 killing process with pid 81503 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81503' 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81503 00:19:24.547 [2024-12-05 19:39:17.879630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.547 19:39:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81503 
00:19:24.547 [2024-12-05 19:39:17.879782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.547 [2024-12-05 19:39:17.879893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.547 [2024-12-05 19:39:17.879915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:24.805 [2024-12-05 19:39:18.128591] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.741 ************************************ 00:19:25.741 END TEST raid5f_superblock_test 00:19:25.741 ************************************ 00:19:25.741 19:39:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:25.741 00:19:25.741 real 0m8.763s 00:19:25.741 user 0m14.296s 00:19:25.741 sys 0m1.341s 00:19:25.741 19:39:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.741 19:39:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.998 19:39:19 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:25.998 19:39:19 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:19:25.998 19:39:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:25.998 19:39:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.998 19:39:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.998 ************************************ 00:19:25.998 START TEST raid5f_rebuild_test 00:19:25.998 ************************************ 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:25.998 19:39:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81959 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81959 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81959 ']' 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.998 19:39:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.998 [2024-12-05 19:39:19.350861] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:19:25.998 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:25.998 Zero copy mechanism will not be used. 00:19:25.998 [2024-12-05 19:39:19.351057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81959 ] 00:19:26.255 [2024-12-05 19:39:19.538933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.255 [2024-12-05 19:39:19.669189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.512 [2024-12-05 19:39:19.875423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.512 [2024-12-05 19:39:19.875483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 BaseBdev1_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 
19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 [2024-12-05 19:39:20.371636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:27.077 [2024-12-05 19:39:20.371722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.077 [2024-12-05 19:39:20.371775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:27.077 [2024-12-05 19:39:20.371794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.077 [2024-12-05 19:39:20.374723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.077 [2024-12-05 19:39:20.374799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:27.077 BaseBdev1 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 BaseBdev2_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 [2024-12-05 19:39:20.423748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:27.077 [2024-12-05 19:39:20.423865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.077 [2024-12-05 19:39:20.423898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:27.077 [2024-12-05 19:39:20.423916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.077 [2024-12-05 19:39:20.426727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.077 [2024-12-05 19:39:20.426813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:27.077 BaseBdev2 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 BaseBdev3_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 [2024-12-05 19:39:20.485702] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:27.077 [2024-12-05 19:39:20.485803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.077 [2024-12-05 19:39:20.485834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:27.077 [2024-12-05 19:39:20.485851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.077 [2024-12-05 19:39:20.488507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.077 [2024-12-05 19:39:20.488549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:27.077 BaseBdev3 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.077 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 spare_malloc 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 spare_delay 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 [2024-12-05 19:39:20.546621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:27.335 [2024-12-05 19:39:20.546749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.335 [2024-12-05 19:39:20.546778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:27.335 [2024-12-05 19:39:20.546796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.335 [2024-12-05 19:39:20.550037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.335 [2024-12-05 19:39:20.550116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:27.335 spare 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 [2024-12-05 19:39:20.554860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.335 [2024-12-05 19:39:20.557396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.335 [2024-12-05 19:39:20.557503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:27.335 [2024-12-05 19:39:20.557644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:27.335 [2024-12-05 19:39:20.557660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:27.335 [2024-12-05 
19:39:20.557978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:27.335 [2024-12-05 19:39:20.563325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:27.335 [2024-12-05 19:39:20.563357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:27.335 [2024-12-05 19:39:20.563632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.335 "name": "raid_bdev1", 00:19:27.335 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:27.335 "strip_size_kb": 64, 00:19:27.335 "state": "online", 00:19:27.335 "raid_level": "raid5f", 00:19:27.335 "superblock": false, 00:19:27.335 "num_base_bdevs": 3, 00:19:27.335 "num_base_bdevs_discovered": 3, 00:19:27.335 "num_base_bdevs_operational": 3, 00:19:27.335 "base_bdevs_list": [ 00:19:27.335 { 00:19:27.335 "name": "BaseBdev1", 00:19:27.335 "uuid": "05fefb48-83be-5ee6-bf0a-8af7702d41c2", 00:19:27.335 "is_configured": true, 00:19:27.335 "data_offset": 0, 00:19:27.335 "data_size": 65536 00:19:27.335 }, 00:19:27.335 { 00:19:27.335 "name": "BaseBdev2", 00:19:27.335 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:27.335 "is_configured": true, 00:19:27.335 "data_offset": 0, 00:19:27.335 "data_size": 65536 00:19:27.335 }, 00:19:27.335 { 00:19:27.335 "name": "BaseBdev3", 00:19:27.335 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:27.335 "is_configured": true, 00:19:27.335 "data_offset": 0, 00:19:27.335 "data_size": 65536 00:19:27.335 } 00:19:27.335 ] 00:19:27.335 }' 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.335 19:39:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 [2024-12-05 19:39:21.082330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:27.900 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:27.901 19:39:21 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:19:27.901 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:27.901 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:27.901 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:27.901 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:28.159 [2024-12-05 19:39:21.494285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:28.159 /dev/nbd0 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.159 1+0 records in 00:19:28.159 1+0 records out 00:19:28.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293846 s, 
13.9 MB/s 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:28.159 19:39:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:19:28.739 512+0 records in 00:19:28.739 512+0 records out 00:19:28.739 67108864 bytes (67 MB, 64 MiB) copied, 0.50144 s, 134 MB/s 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:19:28.739 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.011 [2024-12-05 19:39:22.376036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.011 [2024-12-05 19:39:22.385825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.011 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.280 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.280 "name": "raid_bdev1", 00:19:29.280 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:29.280 "strip_size_kb": 64, 00:19:29.280 "state": "online", 00:19:29.280 "raid_level": "raid5f", 00:19:29.280 "superblock": false, 00:19:29.280 "num_base_bdevs": 3, 00:19:29.280 "num_base_bdevs_discovered": 2, 00:19:29.280 "num_base_bdevs_operational": 2, 00:19:29.280 "base_bdevs_list": [ 00:19:29.280 { 00:19:29.280 "name": null, 00:19:29.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.280 "is_configured": false, 00:19:29.280 "data_offset": 0, 00:19:29.280 "data_size": 65536 00:19:29.280 }, 
00:19:29.280 { 00:19:29.280 "name": "BaseBdev2", 00:19:29.280 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:29.280 "is_configured": true, 00:19:29.280 "data_offset": 0, 00:19:29.280 "data_size": 65536 00:19:29.280 }, 00:19:29.280 { 00:19:29.280 "name": "BaseBdev3", 00:19:29.280 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:29.280 "is_configured": true, 00:19:29.280 "data_offset": 0, 00:19:29.280 "data_size": 65536 00:19:29.280 } 00:19:29.280 ] 00:19:29.280 }' 00:19:29.280 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.280 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.538 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:29.538 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.538 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.538 [2024-12-05 19:39:22.918032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.538 [2024-12-05 19:39:22.934281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:19:29.538 19:39:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.538 19:39:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:29.538 [2024-12-05 19:39:22.942141] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.912 "name": "raid_bdev1", 00:19:30.912 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:30.912 "strip_size_kb": 64, 00:19:30.912 "state": "online", 00:19:30.912 "raid_level": "raid5f", 00:19:30.912 "superblock": false, 00:19:30.912 "num_base_bdevs": 3, 00:19:30.912 "num_base_bdevs_discovered": 3, 00:19:30.912 "num_base_bdevs_operational": 3, 00:19:30.912 "process": { 00:19:30.912 "type": "rebuild", 00:19:30.912 "target": "spare", 00:19:30.912 "progress": { 00:19:30.912 "blocks": 18432, 00:19:30.912 "percent": 14 00:19:30.912 } 00:19:30.912 }, 00:19:30.912 "base_bdevs_list": [ 00:19:30.912 { 00:19:30.912 "name": "spare", 00:19:30.912 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:30.912 "is_configured": true, 00:19:30.912 "data_offset": 0, 00:19:30.912 "data_size": 65536 00:19:30.912 }, 00:19:30.912 { 00:19:30.912 "name": "BaseBdev2", 00:19:30.912 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:30.912 "is_configured": true, 00:19:30.912 "data_offset": 0, 00:19:30.912 "data_size": 65536 00:19:30.912 }, 00:19:30.912 { 00:19:30.912 "name": "BaseBdev3", 00:19:30.912 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:30.912 "is_configured": true, 00:19:30.912 
"data_offset": 0, 00:19:30.912 "data_size": 65536 00:19:30.912 } 00:19:30.912 ] 00:19:30.912 }' 00:19:30.912 19:39:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.912 [2024-12-05 19:39:24.109173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:30.912 [2024-12-05 19:39:24.158661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:30.912 [2024-12-05 19:39:24.158771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.912 [2024-12-05 19:39:24.158812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:30.912 [2024-12-05 19:39:24.158839] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.912 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.913 19:39:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.913 "name": "raid_bdev1", 00:19:30.913 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:30.913 "strip_size_kb": 64, 00:19:30.913 "state": "online", 00:19:30.913 "raid_level": "raid5f", 00:19:30.913 "superblock": false, 00:19:30.913 "num_base_bdevs": 3, 00:19:30.913 "num_base_bdevs_discovered": 2, 00:19:30.913 "num_base_bdevs_operational": 2, 00:19:30.913 "base_bdevs_list": [ 00:19:30.913 { 00:19:30.913 "name": null, 00:19:30.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.913 "is_configured": false, 00:19:30.913 "data_offset": 0, 00:19:30.913 "data_size": 65536 00:19:30.913 }, 00:19:30.913 { 00:19:30.913 
"name": "BaseBdev2", 00:19:30.913 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:30.913 "is_configured": true, 00:19:30.913 "data_offset": 0, 00:19:30.913 "data_size": 65536 00:19:30.913 }, 00:19:30.913 { 00:19:30.913 "name": "BaseBdev3", 00:19:30.913 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:30.913 "is_configured": true, 00:19:30.913 "data_offset": 0, 00:19:30.913 "data_size": 65536 00:19:30.913 } 00:19:30.913 ] 00:19:30.913 }' 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.913 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.481 "name": "raid_bdev1", 00:19:31.481 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:31.481 "strip_size_kb": 64, 00:19:31.481 "state": 
"online", 00:19:31.481 "raid_level": "raid5f", 00:19:31.481 "superblock": false, 00:19:31.481 "num_base_bdevs": 3, 00:19:31.481 "num_base_bdevs_discovered": 2, 00:19:31.481 "num_base_bdevs_operational": 2, 00:19:31.481 "base_bdevs_list": [ 00:19:31.481 { 00:19:31.481 "name": null, 00:19:31.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.481 "is_configured": false, 00:19:31.481 "data_offset": 0, 00:19:31.481 "data_size": 65536 00:19:31.481 }, 00:19:31.481 { 00:19:31.481 "name": "BaseBdev2", 00:19:31.481 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:31.481 "is_configured": true, 00:19:31.481 "data_offset": 0, 00:19:31.481 "data_size": 65536 00:19:31.481 }, 00:19:31.481 { 00:19:31.481 "name": "BaseBdev3", 00:19:31.481 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:31.481 "is_configured": true, 00:19:31.481 "data_offset": 0, 00:19:31.481 "data_size": 65536 00:19:31.481 } 00:19:31.481 ] 00:19:31.481 }' 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.481 [2024-12-05 19:39:24.877351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.481 [2024-12-05 19:39:24.892028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:31.481 19:39:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.481 19:39:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:31.481 [2024-12-05 19:39:24.899217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.862 "name": "raid_bdev1", 00:19:32.862 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:32.862 "strip_size_kb": 64, 00:19:32.862 "state": "online", 00:19:32.862 "raid_level": "raid5f", 00:19:32.862 "superblock": false, 00:19:32.862 "num_base_bdevs": 3, 00:19:32.862 "num_base_bdevs_discovered": 3, 00:19:32.862 "num_base_bdevs_operational": 3, 00:19:32.862 "process": { 00:19:32.862 "type": "rebuild", 00:19:32.862 "target": "spare", 00:19:32.862 "progress": { 
00:19:32.862 "blocks": 18432, 00:19:32.862 "percent": 14 00:19:32.862 } 00:19:32.862 }, 00:19:32.862 "base_bdevs_list": [ 00:19:32.862 { 00:19:32.862 "name": "spare", 00:19:32.862 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:32.862 "is_configured": true, 00:19:32.862 "data_offset": 0, 00:19:32.862 "data_size": 65536 00:19:32.862 }, 00:19:32.862 { 00:19:32.862 "name": "BaseBdev2", 00:19:32.862 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:32.862 "is_configured": true, 00:19:32.862 "data_offset": 0, 00:19:32.862 "data_size": 65536 00:19:32.862 }, 00:19:32.862 { 00:19:32.862 "name": "BaseBdev3", 00:19:32.862 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:32.862 "is_configured": true, 00:19:32.862 "data_offset": 0, 00:19:32.862 "data_size": 65536 00:19:32.862 } 00:19:32.862 ] 00:19:32.862 }' 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.862 19:39:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.862 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.862 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=600 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.863 "name": "raid_bdev1", 00:19:32.863 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:32.863 "strip_size_kb": 64, 00:19:32.863 "state": "online", 00:19:32.863 "raid_level": "raid5f", 00:19:32.863 "superblock": false, 00:19:32.863 "num_base_bdevs": 3, 00:19:32.863 "num_base_bdevs_discovered": 3, 00:19:32.863 "num_base_bdevs_operational": 3, 00:19:32.863 "process": { 00:19:32.863 "type": "rebuild", 00:19:32.863 "target": "spare", 00:19:32.863 "progress": { 00:19:32.863 "blocks": 22528, 00:19:32.863 "percent": 17 00:19:32.863 } 00:19:32.863 }, 00:19:32.863 "base_bdevs_list": [ 00:19:32.863 { 00:19:32.863 "name": "spare", 00:19:32.863 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:32.863 "is_configured": true, 00:19:32.863 "data_offset": 0, 00:19:32.863 "data_size": 65536 00:19:32.863 }, 00:19:32.863 { 00:19:32.863 "name": "BaseBdev2", 00:19:32.863 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:32.863 "is_configured": true, 00:19:32.863 
"data_offset": 0, 00:19:32.863 "data_size": 65536 00:19:32.863 }, 00:19:32.863 { 00:19:32.863 "name": "BaseBdev3", 00:19:32.863 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:32.863 "is_configured": true, 00:19:32.863 "data_offset": 0, 00:19:32.863 "data_size": 65536 00:19:32.863 } 00:19:32.863 ] 00:19:32.863 }' 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.863 19:39:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.799 19:39:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.058 19:39:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.058 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.058 "name": "raid_bdev1", 00:19:34.058 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:34.058 "strip_size_kb": 64, 00:19:34.058 "state": "online", 00:19:34.058 "raid_level": "raid5f", 00:19:34.058 "superblock": false, 00:19:34.058 "num_base_bdevs": 3, 00:19:34.058 "num_base_bdevs_discovered": 3, 00:19:34.058 "num_base_bdevs_operational": 3, 00:19:34.058 "process": { 00:19:34.058 "type": "rebuild", 00:19:34.058 "target": "spare", 00:19:34.058 "progress": { 00:19:34.058 "blocks": 47104, 00:19:34.058 "percent": 35 00:19:34.058 } 00:19:34.058 }, 00:19:34.058 "base_bdevs_list": [ 00:19:34.058 { 00:19:34.058 "name": "spare", 00:19:34.058 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:34.058 "is_configured": true, 00:19:34.058 "data_offset": 0, 00:19:34.058 "data_size": 65536 00:19:34.058 }, 00:19:34.058 { 00:19:34.058 "name": "BaseBdev2", 00:19:34.058 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:34.058 "is_configured": true, 00:19:34.058 "data_offset": 0, 00:19:34.058 "data_size": 65536 00:19:34.058 }, 00:19:34.058 { 00:19:34.058 "name": "BaseBdev3", 00:19:34.058 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:34.058 "is_configured": true, 00:19:34.058 "data_offset": 0, 00:19:34.058 "data_size": 65536 00:19:34.058 } 00:19:34.058 ] 00:19:34.058 }' 00:19:34.058 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.058 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.058 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.058 19:39:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.058 19:39:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.993 19:39:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.251 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.251 "name": "raid_bdev1", 00:19:35.251 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:35.251 "strip_size_kb": 64, 00:19:35.251 "state": "online", 00:19:35.251 "raid_level": "raid5f", 00:19:35.251 "superblock": false, 00:19:35.251 "num_base_bdevs": 3, 00:19:35.251 "num_base_bdevs_discovered": 3, 00:19:35.251 "num_base_bdevs_operational": 3, 00:19:35.251 "process": { 00:19:35.251 "type": "rebuild", 00:19:35.251 "target": "spare", 00:19:35.251 "progress": { 00:19:35.251 "blocks": 69632, 00:19:35.251 "percent": 53 00:19:35.251 } 00:19:35.251 }, 00:19:35.251 "base_bdevs_list": [ 00:19:35.251 { 00:19:35.251 "name": "spare", 00:19:35.251 
"uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:35.251 "is_configured": true, 00:19:35.251 "data_offset": 0, 00:19:35.251 "data_size": 65536 00:19:35.251 }, 00:19:35.251 { 00:19:35.251 "name": "BaseBdev2", 00:19:35.251 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:35.251 "is_configured": true, 00:19:35.251 "data_offset": 0, 00:19:35.251 "data_size": 65536 00:19:35.251 }, 00:19:35.251 { 00:19:35.251 "name": "BaseBdev3", 00:19:35.251 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:35.251 "is_configured": true, 00:19:35.251 "data_offset": 0, 00:19:35.251 "data_size": 65536 00:19:35.251 } 00:19:35.251 ] 00:19:35.251 }' 00:19:35.251 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.251 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.251 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.251 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.251 19:39:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.187 19:39:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.187 "name": "raid_bdev1", 00:19:36.187 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:36.187 "strip_size_kb": 64, 00:19:36.187 "state": "online", 00:19:36.187 "raid_level": "raid5f", 00:19:36.187 "superblock": false, 00:19:36.187 "num_base_bdevs": 3, 00:19:36.187 "num_base_bdevs_discovered": 3, 00:19:36.187 "num_base_bdevs_operational": 3, 00:19:36.187 "process": { 00:19:36.187 "type": "rebuild", 00:19:36.187 "target": "spare", 00:19:36.187 "progress": { 00:19:36.187 "blocks": 94208, 00:19:36.187 "percent": 71 00:19:36.187 } 00:19:36.187 }, 00:19:36.187 "base_bdevs_list": [ 00:19:36.187 { 00:19:36.187 "name": "spare", 00:19:36.187 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:36.187 "is_configured": true, 00:19:36.187 "data_offset": 0, 00:19:36.187 "data_size": 65536 00:19:36.187 }, 00:19:36.187 { 00:19:36.187 "name": "BaseBdev2", 00:19:36.187 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:36.187 "is_configured": true, 00:19:36.187 "data_offset": 0, 00:19:36.187 "data_size": 65536 00:19:36.187 }, 00:19:36.187 { 00:19:36.187 "name": "BaseBdev3", 00:19:36.187 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:36.187 "is_configured": true, 00:19:36.187 "data_offset": 0, 00:19:36.187 "data_size": 65536 00:19:36.187 } 00:19:36.187 ] 00:19:36.187 }' 00:19:36.187 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.445 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.445 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.445 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.445 19:39:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.383 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.383 "name": "raid_bdev1", 00:19:37.383 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:37.383 "strip_size_kb": 64, 00:19:37.383 "state": "online", 00:19:37.383 "raid_level": "raid5f", 00:19:37.383 "superblock": false, 00:19:37.383 "num_base_bdevs": 3, 00:19:37.383 "num_base_bdevs_discovered": 3, 00:19:37.383 
"num_base_bdevs_operational": 3, 00:19:37.383 "process": { 00:19:37.383 "type": "rebuild", 00:19:37.383 "target": "spare", 00:19:37.383 "progress": { 00:19:37.383 "blocks": 116736, 00:19:37.383 "percent": 89 00:19:37.383 } 00:19:37.383 }, 00:19:37.383 "base_bdevs_list": [ 00:19:37.383 { 00:19:37.383 "name": "spare", 00:19:37.383 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:37.383 "is_configured": true, 00:19:37.383 "data_offset": 0, 00:19:37.383 "data_size": 65536 00:19:37.383 }, 00:19:37.383 { 00:19:37.383 "name": "BaseBdev2", 00:19:37.383 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:37.383 "is_configured": true, 00:19:37.383 "data_offset": 0, 00:19:37.383 "data_size": 65536 00:19:37.384 }, 00:19:37.384 { 00:19:37.384 "name": "BaseBdev3", 00:19:37.384 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:37.384 "is_configured": true, 00:19:37.384 "data_offset": 0, 00:19:37.384 "data_size": 65536 00:19:37.384 } 00:19:37.384 ] 00:19:37.384 }' 00:19:37.384 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.642 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.642 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.642 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.642 19:39:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.210 [2024-12-05 19:39:31.383725] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:38.210 [2024-12-05 19:39:31.383867] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:38.210 [2024-12-05 19:39:31.383931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.469 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.728 19:39:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.728 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.728 "name": "raid_bdev1", 00:19:38.728 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:38.728 "strip_size_kb": 64, 00:19:38.728 "state": "online", 00:19:38.728 "raid_level": "raid5f", 00:19:38.728 "superblock": false, 00:19:38.728 "num_base_bdevs": 3, 00:19:38.728 "num_base_bdevs_discovered": 3, 00:19:38.728 "num_base_bdevs_operational": 3, 00:19:38.728 "base_bdevs_list": [ 00:19:38.728 { 00:19:38.728 "name": "spare", 00:19:38.728 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:38.728 "is_configured": true, 00:19:38.728 "data_offset": 0, 00:19:38.728 "data_size": 65536 00:19:38.728 }, 00:19:38.728 { 00:19:38.728 "name": "BaseBdev2", 00:19:38.728 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:38.728 "is_configured": true, 00:19:38.728 
"data_offset": 0, 00:19:38.728 "data_size": 65536 00:19:38.728 }, 00:19:38.729 { 00:19:38.729 "name": "BaseBdev3", 00:19:38.729 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:38.729 "is_configured": true, 00:19:38.729 "data_offset": 0, 00:19:38.729 "data_size": 65536 00:19:38.729 } 00:19:38.729 ] 00:19:38.729 }' 00:19:38.729 19:39:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.729 19:39:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.729 "name": "raid_bdev1", 00:19:38.729 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:38.729 "strip_size_kb": 64, 00:19:38.729 "state": "online", 00:19:38.729 "raid_level": "raid5f", 00:19:38.729 "superblock": false, 00:19:38.729 "num_base_bdevs": 3, 00:19:38.729 "num_base_bdevs_discovered": 3, 00:19:38.729 "num_base_bdevs_operational": 3, 00:19:38.729 "base_bdevs_list": [ 00:19:38.729 { 00:19:38.729 "name": "spare", 00:19:38.729 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:38.729 "is_configured": true, 00:19:38.729 "data_offset": 0, 00:19:38.729 "data_size": 65536 00:19:38.729 }, 00:19:38.729 { 00:19:38.729 "name": "BaseBdev2", 00:19:38.729 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:38.729 "is_configured": true, 00:19:38.729 "data_offset": 0, 00:19:38.729 "data_size": 65536 00:19:38.729 }, 00:19:38.729 { 00:19:38.729 "name": "BaseBdev3", 00:19:38.729 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:38.729 "is_configured": true, 00:19:38.729 "data_offset": 0, 00:19:38.729 "data_size": 65536 00:19:38.729 } 00:19:38.729 ] 00:19:38.729 }' 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.729 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.987 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.987 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.987 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.988 19:39:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.988 "name": "raid_bdev1", 00:19:38.988 "uuid": "c2947fcc-1889-4761-868f-264f6f82af7a", 00:19:38.988 "strip_size_kb": 64, 00:19:38.988 "state": "online", 00:19:38.988 "raid_level": "raid5f", 00:19:38.988 "superblock": false, 00:19:38.988 "num_base_bdevs": 3, 00:19:38.988 "num_base_bdevs_discovered": 3, 00:19:38.988 "num_base_bdevs_operational": 3, 00:19:38.988 "base_bdevs_list": [ 00:19:38.988 { 00:19:38.988 "name": "spare", 00:19:38.988 "uuid": "3cf5f08f-f1f4-566f-b602-fd0d8988dcb1", 00:19:38.988 "is_configured": true, 00:19:38.988 "data_offset": 0, 00:19:38.988 "data_size": 65536 00:19:38.988 }, 00:19:38.988 { 00:19:38.988 
"name": "BaseBdev2", 00:19:38.988 "uuid": "e97acd34-8cde-5c3d-8aaa-5b8db793c47d", 00:19:38.988 "is_configured": true, 00:19:38.988 "data_offset": 0, 00:19:38.988 "data_size": 65536 00:19:38.988 }, 00:19:38.988 { 00:19:38.988 "name": "BaseBdev3", 00:19:38.988 "uuid": "2a583aa6-3186-5c57-837a-42c6bccc4280", 00:19:38.988 "is_configured": true, 00:19:38.988 "data_offset": 0, 00:19:38.988 "data_size": 65536 00:19:38.988 } 00:19:38.988 ] 00:19:38.988 }' 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.988 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.554 [2024-12-05 19:39:32.756947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.554 [2024-12-05 19:39:32.757228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.554 [2024-12-05 19:39:32.757360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.554 [2024-12-05 19:39:32.757465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.554 [2024-12-05 19:39:32.757490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.554 19:39:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:39.813 /dev/nbd0 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.813 1+0 records in 00:19:39.813 1+0 records out 00:19:39.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242495 s, 16.9 MB/s 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.813 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:40.071 /dev/nbd1 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.071 1+0 records in 00:19:40.071 1+0 records out 00:19:40.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414306 s, 9.9 MB/s 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:40.071 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.072 19:39:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:40.072 19:39:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:40.072 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.072 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:40.072 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.331 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.590 19:39:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:41.156 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81959 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81959 ']' 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81959 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81959 00:19:41.157 killing process with pid 81959 00:19:41.157 Received shutdown signal, test time was about 60.000000 seconds 00:19:41.157 00:19:41.157 Latency(us) 00:19:41.157 
[2024-12-05T19:39:34.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.157 [2024-12-05T19:39:34.598Z] =================================================================================================================== 00:19:41.157 [2024-12-05T19:39:34.598Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81959' 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81959 00:19:41.157 19:39:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81959 00:19:41.157 [2024-12-05 19:39:34.334076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.416 [2024-12-05 19:39:34.697655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.352 19:39:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:42.352 00:19:42.352 real 0m16.568s 00:19:42.352 user 0m21.200s 00:19:42.352 sys 0m2.096s 00:19:42.352 19:39:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.352 ************************************ 00:19:42.352 END TEST raid5f_rebuild_test 00:19:42.352 19:39:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.352 ************************************ 00:19:42.611 19:39:35 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:42.611 19:39:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:42.611 19:39:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.611 19:39:35 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.611 ************************************ 00:19:42.611 START TEST raid5f_rebuild_test_sb 00:19:42.611 ************************************ 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:42.611 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82412 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82412 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82412 ']' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.612 19:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.612 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:42.612 Zero copy mechanism will not be used. 00:19:42.612 [2024-12-05 19:39:35.956405] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:19:42.612 [2024-12-05 19:39:35.956579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82412 ] 00:19:42.871 [2024-12-05 19:39:36.141823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.871 [2024-12-05 19:39:36.272157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.130 [2024-12-05 19:39:36.474558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.130 [2024-12-05 19:39:36.474623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.698 19:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.698 19:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:43.698 19:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:19:43.698 19:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:43.698 19:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 19:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.698 BaseBdev1_malloc 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.698 [2024-12-05 19:39:37.018178] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:43.698 [2024-12-05 19:39:37.018256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.698 [2024-12-05 19:39:37.018289] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:43.698 [2024-12-05 19:39:37.018308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.698 [2024-12-05 19:39:37.021165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.698 [2024-12-05 19:39:37.021242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.698 BaseBdev1 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:43.698 19:39:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.698 BaseBdev2_malloc 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.698 [2024-12-05 19:39:37.074072] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:43.698 [2024-12-05 19:39:37.074178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.698 [2024-12-05 19:39:37.074228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:43.698 [2024-12-05 19:39:37.074247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.698 [2024-12-05 19:39:37.077119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.698 [2024-12-05 19:39:37.077179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:43.698 BaseBdev2 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:43.698 BaseBdev3_malloc 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.698 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.959 [2024-12-05 19:39:37.139090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:43.959 [2024-12-05 19:39:37.139162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.959 [2024-12-05 19:39:37.139196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:43.959 [2024-12-05 19:39:37.139216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.959 [2024-12-05 19:39:37.141992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.959 [2024-12-05 19:39:37.142038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:43.959 BaseBdev3 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.959 spare_malloc 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.959 spare_delay 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.959 [2024-12-05 19:39:37.202699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.959 [2024-12-05 19:39:37.202814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.959 [2024-12-05 19:39:37.202841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:43.959 [2024-12-05 19:39:37.202859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.959 [2024-12-05 19:39:37.205941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.959 [2024-12-05 19:39:37.205989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.959 spare 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.959 [2024-12-05 19:39:37.210822] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.959 [2024-12-05 19:39:37.213363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.959 [2024-12-05 19:39:37.213461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.959 [2024-12-05 19:39:37.213813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:43.959 [2024-12-05 19:39:37.213842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:43.959 [2024-12-05 19:39:37.214181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:43.959 [2024-12-05 19:39:37.219643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:43.959 [2024-12-05 19:39:37.219694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:43.959 [2024-12-05 19:39:37.219949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.959 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.959 "name": "raid_bdev1", 00:19:43.959 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:43.959 "strip_size_kb": 64, 00:19:43.959 "state": "online", 00:19:43.959 "raid_level": "raid5f", 00:19:43.959 "superblock": true, 00:19:43.959 "num_base_bdevs": 3, 00:19:43.959 "num_base_bdevs_discovered": 3, 00:19:43.959 "num_base_bdevs_operational": 3, 00:19:43.959 "base_bdevs_list": [ 00:19:43.959 { 00:19:43.959 "name": "BaseBdev1", 00:19:43.959 "uuid": "f90ab9d7-a800-5213-939f-889cac1510b3", 00:19:43.959 "is_configured": true, 00:19:43.959 "data_offset": 2048, 00:19:43.959 "data_size": 63488 00:19:43.959 }, 00:19:43.959 { 00:19:43.959 "name": "BaseBdev2", 00:19:43.959 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:43.959 "is_configured": true, 00:19:43.959 "data_offset": 2048, 00:19:43.959 "data_size": 63488 00:19:43.959 }, 00:19:43.959 { 00:19:43.959 "name": "BaseBdev3", 00:19:43.959 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:43.959 "is_configured": true, 
00:19:43.959 "data_offset": 2048, 00:19:43.959 "data_size": 63488 00:19:43.960 } 00:19:43.960 ] 00:19:43.960 }' 00:19:43.960 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.960 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:44.529 [2024-12-05 19:39:37.742318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:44.529 19:39:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.529 19:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:44.787 [2024-12-05 19:39:38.110245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:44.787 /dev/nbd0 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.787 1+0 records in 00:19:44.787 1+0 records out 00:19:44.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280589 s, 14.6 MB/s 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:44.787 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:45.353 496+0 records in 00:19:45.353 496+0 records out 00:19:45.353 65011712 bytes (65 MB, 62 MiB) copied, 0.469324 s, 139 MB/s 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.353 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:45.612 [2024-12-05 19:39:38.946289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.612 [2024-12-05 19:39:38.956227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.612 19:39:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.612 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.612 "name": "raid_bdev1", 00:19:45.612 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:45.612 "strip_size_kb": 64, 00:19:45.612 "state": "online", 00:19:45.612 "raid_level": "raid5f", 00:19:45.612 "superblock": true, 00:19:45.612 "num_base_bdevs": 3, 00:19:45.612 "num_base_bdevs_discovered": 2, 00:19:45.612 "num_base_bdevs_operational": 2, 00:19:45.612 "base_bdevs_list": [ 00:19:45.612 { 00:19:45.612 "name": null, 00:19:45.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.612 "is_configured": false, 00:19:45.612 "data_offset": 0, 00:19:45.612 "data_size": 63488 00:19:45.612 }, 00:19:45.612 { 00:19:45.612 "name": "BaseBdev2", 00:19:45.612 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:45.612 "is_configured": true, 00:19:45.612 "data_offset": 2048, 00:19:45.612 "data_size": 63488 00:19:45.612 }, 00:19:45.612 { 00:19:45.612 "name": "BaseBdev3", 00:19:45.612 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:45.612 "is_configured": true, 00:19:45.612 "data_offset": 2048, 00:19:45.612 "data_size": 63488 00:19:45.612 } 00:19:45.612 ] 00:19:45.612 }' 00:19:45.612 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.612 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.179 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.179 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.179 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.179 [2024-12-05 19:39:39.476439] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.179 [2024-12-05 19:39:39.492052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:46.179 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.179 19:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:46.179 [2024-12-05 19:39:39.499187] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.117 "name": "raid_bdev1", 00:19:47.117 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:47.117 "strip_size_kb": 64, 00:19:47.117 "state": "online", 00:19:47.117 "raid_level": "raid5f", 00:19:47.117 
"superblock": true, 00:19:47.117 "num_base_bdevs": 3, 00:19:47.117 "num_base_bdevs_discovered": 3, 00:19:47.117 "num_base_bdevs_operational": 3, 00:19:47.117 "process": { 00:19:47.117 "type": "rebuild", 00:19:47.117 "target": "spare", 00:19:47.117 "progress": { 00:19:47.117 "blocks": 18432, 00:19:47.117 "percent": 14 00:19:47.117 } 00:19:47.117 }, 00:19:47.117 "base_bdevs_list": [ 00:19:47.117 { 00:19:47.117 "name": "spare", 00:19:47.117 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:47.117 "is_configured": true, 00:19:47.117 "data_offset": 2048, 00:19:47.117 "data_size": 63488 00:19:47.117 }, 00:19:47.117 { 00:19:47.117 "name": "BaseBdev2", 00:19:47.117 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:47.117 "is_configured": true, 00:19:47.117 "data_offset": 2048, 00:19:47.117 "data_size": 63488 00:19:47.117 }, 00:19:47.117 { 00:19:47.117 "name": "BaseBdev3", 00:19:47.117 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:47.117 "is_configured": true, 00:19:47.117 "data_offset": 2048, 00:19:47.117 "data_size": 63488 00:19:47.117 } 00:19:47.117 ] 00:19:47.117 }' 00:19:47.117 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.376 [2024-12-05 19:39:40.661453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:19:47.376 [2024-12-05 19:39:40.713816] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:47.376 [2024-12-05 19:39:40.714087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.376 [2024-12-05 19:39:40.714225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.376 [2024-12-05 19:39:40.714368] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.376 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.377 
19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.377 "name": "raid_bdev1", 00:19:47.377 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:47.377 "strip_size_kb": 64, 00:19:47.377 "state": "online", 00:19:47.377 "raid_level": "raid5f", 00:19:47.377 "superblock": true, 00:19:47.377 "num_base_bdevs": 3, 00:19:47.377 "num_base_bdevs_discovered": 2, 00:19:47.377 "num_base_bdevs_operational": 2, 00:19:47.377 "base_bdevs_list": [ 00:19:47.377 { 00:19:47.377 "name": null, 00:19:47.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.377 "is_configured": false, 00:19:47.377 "data_offset": 0, 00:19:47.377 "data_size": 63488 00:19:47.377 }, 00:19:47.377 { 00:19:47.377 "name": "BaseBdev2", 00:19:47.377 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:47.377 "is_configured": true, 00:19:47.377 "data_offset": 2048, 00:19:47.377 "data_size": 63488 00:19:47.377 }, 00:19:47.377 { 00:19:47.377 "name": "BaseBdev3", 00:19:47.377 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:47.377 "is_configured": true, 00:19:47.377 "data_offset": 2048, 00:19:47.377 "data_size": 63488 00:19:47.377 } 00:19:47.377 ] 00:19:47.377 }' 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.377 19:39:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.945 19:39:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.945 "name": "raid_bdev1", 00:19:47.945 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:47.945 "strip_size_kb": 64, 00:19:47.945 "state": "online", 00:19:47.945 "raid_level": "raid5f", 00:19:47.945 "superblock": true, 00:19:47.945 "num_base_bdevs": 3, 00:19:47.945 "num_base_bdevs_discovered": 2, 00:19:47.945 "num_base_bdevs_operational": 2, 00:19:47.945 "base_bdevs_list": [ 00:19:47.945 { 00:19:47.945 "name": null, 00:19:47.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.945 "is_configured": false, 00:19:47.945 "data_offset": 0, 00:19:47.945 "data_size": 63488 00:19:47.945 }, 00:19:47.945 { 00:19:47.945 "name": "BaseBdev2", 00:19:47.945 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:47.945 "is_configured": true, 00:19:47.945 "data_offset": 2048, 00:19:47.945 "data_size": 63488 00:19:47.945 }, 00:19:47.945 { 00:19:47.945 "name": "BaseBdev3", 00:19:47.945 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:47.945 "is_configured": true, 00:19:47.945 "data_offset": 2048, 00:19:47.945 
"data_size": 63488 00:19:47.945 } 00:19:47.945 ] 00:19:47.945 }' 00:19:47.945 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.204 [2024-12-05 19:39:41.458177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.204 [2024-12-05 19:39:41.472891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.204 19:39:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:48.204 [2024-12-05 19:39:41.480349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.141 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.141 "name": "raid_bdev1", 00:19:49.141 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:49.141 "strip_size_kb": 64, 00:19:49.141 "state": "online", 00:19:49.141 "raid_level": "raid5f", 00:19:49.141 "superblock": true, 00:19:49.141 "num_base_bdevs": 3, 00:19:49.141 "num_base_bdevs_discovered": 3, 00:19:49.141 "num_base_bdevs_operational": 3, 00:19:49.141 "process": { 00:19:49.141 "type": "rebuild", 00:19:49.141 "target": "spare", 00:19:49.141 "progress": { 00:19:49.141 "blocks": 18432, 00:19:49.141 "percent": 14 00:19:49.141 } 00:19:49.141 }, 00:19:49.141 "base_bdevs_list": [ 00:19:49.141 { 00:19:49.141 "name": "spare", 00:19:49.141 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:49.141 "is_configured": true, 00:19:49.141 "data_offset": 2048, 00:19:49.141 "data_size": 63488 00:19:49.141 }, 00:19:49.141 { 00:19:49.141 "name": "BaseBdev2", 00:19:49.141 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:49.141 "is_configured": true, 00:19:49.141 "data_offset": 2048, 00:19:49.142 "data_size": 63488 00:19:49.142 }, 00:19:49.142 { 00:19:49.142 "name": "BaseBdev3", 00:19:49.142 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:49.142 "is_configured": true, 00:19:49.142 "data_offset": 2048, 00:19:49.142 "data_size": 63488 00:19:49.142 } 00:19:49.142 ] 00:19:49.142 }' 
00:19:49.142 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.400 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.400 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:49.401 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=616 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.401 "name": "raid_bdev1", 00:19:49.401 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:49.401 "strip_size_kb": 64, 00:19:49.401 "state": "online", 00:19:49.401 "raid_level": "raid5f", 00:19:49.401 "superblock": true, 00:19:49.401 "num_base_bdevs": 3, 00:19:49.401 "num_base_bdevs_discovered": 3, 00:19:49.401 "num_base_bdevs_operational": 3, 00:19:49.401 "process": { 00:19:49.401 "type": "rebuild", 00:19:49.401 "target": "spare", 00:19:49.401 "progress": { 00:19:49.401 "blocks": 22528, 00:19:49.401 "percent": 17 00:19:49.401 } 00:19:49.401 }, 00:19:49.401 "base_bdevs_list": [ 00:19:49.401 { 00:19:49.401 "name": "spare", 00:19:49.401 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:49.401 "is_configured": true, 00:19:49.401 "data_offset": 2048, 00:19:49.401 "data_size": 63488 00:19:49.401 }, 00:19:49.401 { 00:19:49.401 "name": "BaseBdev2", 00:19:49.401 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:49.401 "is_configured": true, 00:19:49.401 "data_offset": 2048, 00:19:49.401 "data_size": 63488 00:19:49.401 }, 00:19:49.401 { 00:19:49.401 "name": "BaseBdev3", 00:19:49.401 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:49.401 "is_configured": true, 00:19:49.401 "data_offset": 2048, 00:19:49.401 "data_size": 63488 00:19:49.401 } 00:19:49.401 ] 00:19:49.401 }' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.401 19:39:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.776 "name": "raid_bdev1", 00:19:50.776 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:50.776 "strip_size_kb": 64, 00:19:50.776 "state": "online", 00:19:50.776 "raid_level": "raid5f", 00:19:50.776 "superblock": true, 00:19:50.776 "num_base_bdevs": 3, 00:19:50.776 "num_base_bdevs_discovered": 3, 00:19:50.776 
"num_base_bdevs_operational": 3, 00:19:50.776 "process": { 00:19:50.776 "type": "rebuild", 00:19:50.776 "target": "spare", 00:19:50.776 "progress": { 00:19:50.776 "blocks": 47104, 00:19:50.776 "percent": 37 00:19:50.776 } 00:19:50.776 }, 00:19:50.776 "base_bdevs_list": [ 00:19:50.776 { 00:19:50.776 "name": "spare", 00:19:50.776 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:50.776 "is_configured": true, 00:19:50.776 "data_offset": 2048, 00:19:50.776 "data_size": 63488 00:19:50.776 }, 00:19:50.776 { 00:19:50.776 "name": "BaseBdev2", 00:19:50.776 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:50.776 "is_configured": true, 00:19:50.776 "data_offset": 2048, 00:19:50.776 "data_size": 63488 00:19:50.776 }, 00:19:50.776 { 00:19:50.776 "name": "BaseBdev3", 00:19:50.776 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:50.776 "is_configured": true, 00:19:50.776 "data_offset": 2048, 00:19:50.776 "data_size": 63488 00:19:50.776 } 00:19:50.776 ] 00:19:50.776 }' 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.776 19:39:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.711 19:39:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.711 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.711 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.711 "name": "raid_bdev1", 00:19:51.711 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:51.711 "strip_size_kb": 64, 00:19:51.711 "state": "online", 00:19:51.711 "raid_level": "raid5f", 00:19:51.711 "superblock": true, 00:19:51.711 "num_base_bdevs": 3, 00:19:51.711 "num_base_bdevs_discovered": 3, 00:19:51.711 "num_base_bdevs_operational": 3, 00:19:51.711 "process": { 00:19:51.711 "type": "rebuild", 00:19:51.711 "target": "spare", 00:19:51.711 "progress": { 00:19:51.711 "blocks": 69632, 00:19:51.711 "percent": 54 00:19:51.711 } 00:19:51.711 }, 00:19:51.711 "base_bdevs_list": [ 00:19:51.711 { 00:19:51.711 "name": "spare", 00:19:51.711 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:51.711 "is_configured": true, 00:19:51.711 "data_offset": 2048, 00:19:51.711 "data_size": 63488 00:19:51.711 }, 00:19:51.711 { 00:19:51.711 "name": "BaseBdev2", 00:19:51.711 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:51.711 "is_configured": true, 00:19:51.711 "data_offset": 2048, 00:19:51.711 "data_size": 63488 00:19:51.711 }, 00:19:51.711 { 00:19:51.711 "name": "BaseBdev3", 
00:19:51.711 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:51.711 "is_configured": true, 00:19:51.711 "data_offset": 2048, 00:19:51.711 "data_size": 63488 00:19:51.711 } 00:19:51.711 ] 00:19:51.711 }' 00:19:51.711 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.711 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.712 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.712 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.712 19:39:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.087 "name": "raid_bdev1", 00:19:53.087 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:53.087 "strip_size_kb": 64, 00:19:53.087 "state": "online", 00:19:53.087 "raid_level": "raid5f", 00:19:53.087 "superblock": true, 00:19:53.087 "num_base_bdevs": 3, 00:19:53.087 "num_base_bdevs_discovered": 3, 00:19:53.087 "num_base_bdevs_operational": 3, 00:19:53.087 "process": { 00:19:53.087 "type": "rebuild", 00:19:53.087 "target": "spare", 00:19:53.087 "progress": { 00:19:53.087 "blocks": 94208, 00:19:53.087 "percent": 74 00:19:53.087 } 00:19:53.087 }, 00:19:53.087 "base_bdevs_list": [ 00:19:53.087 { 00:19:53.087 "name": "spare", 00:19:53.087 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:53.087 "is_configured": true, 00:19:53.087 "data_offset": 2048, 00:19:53.087 "data_size": 63488 00:19:53.087 }, 00:19:53.087 { 00:19:53.087 "name": "BaseBdev2", 00:19:53.087 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:53.087 "is_configured": true, 00:19:53.087 "data_offset": 2048, 00:19:53.087 "data_size": 63488 00:19:53.087 }, 00:19:53.087 { 00:19:53.087 "name": "BaseBdev3", 00:19:53.087 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:53.087 "is_configured": true, 00:19:53.087 "data_offset": 2048, 00:19:53.087 "data_size": 63488 00:19:53.087 } 00:19:53.087 ] 00:19:53.087 }' 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.087 19:39:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.022 19:39:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.022 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.022 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.022 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.023 "name": "raid_bdev1", 00:19:54.023 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:54.023 "strip_size_kb": 64, 00:19:54.023 "state": "online", 00:19:54.023 "raid_level": "raid5f", 00:19:54.023 "superblock": true, 00:19:54.023 "num_base_bdevs": 3, 00:19:54.023 "num_base_bdevs_discovered": 3, 00:19:54.023 "num_base_bdevs_operational": 3, 00:19:54.023 "process": { 00:19:54.023 "type": "rebuild", 00:19:54.023 "target": "spare", 00:19:54.023 "progress": { 00:19:54.023 "blocks": 116736, 00:19:54.023 "percent": 91 00:19:54.023 } 00:19:54.023 }, 00:19:54.023 "base_bdevs_list": [ 00:19:54.023 { 00:19:54.023 "name": "spare", 00:19:54.023 "uuid": 
"7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:54.023 "is_configured": true, 00:19:54.023 "data_offset": 2048, 00:19:54.023 "data_size": 63488 00:19:54.023 }, 00:19:54.023 { 00:19:54.023 "name": "BaseBdev2", 00:19:54.023 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:54.023 "is_configured": true, 00:19:54.023 "data_offset": 2048, 00:19:54.023 "data_size": 63488 00:19:54.023 }, 00:19:54.023 { 00:19:54.023 "name": "BaseBdev3", 00:19:54.023 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:54.023 "is_configured": true, 00:19:54.023 "data_offset": 2048, 00:19:54.023 "data_size": 63488 00:19:54.023 } 00:19:54.023 ] 00:19:54.023 }' 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.023 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.281 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.281 19:39:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.539 [2024-12-05 19:39:47.755791] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:54.539 [2024-12-05 19:39:47.755913] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:54.539 [2024-12-05 19:39:47.756069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.107 "name": "raid_bdev1", 00:19:55.107 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:55.107 "strip_size_kb": 64, 00:19:55.107 "state": "online", 00:19:55.107 "raid_level": "raid5f", 00:19:55.107 "superblock": true, 00:19:55.107 "num_base_bdevs": 3, 00:19:55.107 "num_base_bdevs_discovered": 3, 00:19:55.107 "num_base_bdevs_operational": 3, 00:19:55.107 "base_bdevs_list": [ 00:19:55.107 { 00:19:55.107 "name": "spare", 00:19:55.107 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:55.107 "is_configured": true, 00:19:55.107 "data_offset": 2048, 00:19:55.107 "data_size": 63488 00:19:55.107 }, 00:19:55.107 { 00:19:55.107 "name": "BaseBdev2", 00:19:55.107 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:55.107 "is_configured": true, 00:19:55.107 "data_offset": 2048, 00:19:55.107 "data_size": 63488 00:19:55.107 }, 00:19:55.107 { 00:19:55.107 "name": "BaseBdev3", 00:19:55.107 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:55.107 "is_configured": true, 00:19:55.107 "data_offset": 2048, 00:19:55.107 "data_size": 63488 00:19:55.107 } 
00:19:55.107 ] 00:19:55.107 }' 00:19:55.107 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.366 "name": "raid_bdev1", 00:19:55.366 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:55.366 "strip_size_kb": 64, 00:19:55.366 "state": "online", 00:19:55.366 "raid_level": 
"raid5f", 00:19:55.366 "superblock": true, 00:19:55.366 "num_base_bdevs": 3, 00:19:55.366 "num_base_bdevs_discovered": 3, 00:19:55.366 "num_base_bdevs_operational": 3, 00:19:55.366 "base_bdevs_list": [ 00:19:55.366 { 00:19:55.366 "name": "spare", 00:19:55.366 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:55.366 "is_configured": true, 00:19:55.366 "data_offset": 2048, 00:19:55.366 "data_size": 63488 00:19:55.366 }, 00:19:55.366 { 00:19:55.366 "name": "BaseBdev2", 00:19:55.366 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:55.366 "is_configured": true, 00:19:55.366 "data_offset": 2048, 00:19:55.366 "data_size": 63488 00:19:55.366 }, 00:19:55.366 { 00:19:55.366 "name": "BaseBdev3", 00:19:55.366 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:55.366 "is_configured": true, 00:19:55.366 "data_offset": 2048, 00:19:55.366 "data_size": 63488 00:19:55.366 } 00:19:55.366 ] 00:19:55.366 }' 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.366 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.624 19:39:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.624 "name": "raid_bdev1", 00:19:55.624 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:55.624 "strip_size_kb": 64, 00:19:55.624 "state": "online", 00:19:55.624 "raid_level": "raid5f", 00:19:55.624 "superblock": true, 00:19:55.624 "num_base_bdevs": 3, 00:19:55.624 "num_base_bdevs_discovered": 3, 00:19:55.624 "num_base_bdevs_operational": 3, 00:19:55.624 "base_bdevs_list": [ 00:19:55.624 { 00:19:55.624 "name": "spare", 00:19:55.624 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:55.624 "is_configured": true, 00:19:55.624 "data_offset": 2048, 00:19:55.624 "data_size": 63488 00:19:55.624 }, 00:19:55.624 { 00:19:55.624 "name": "BaseBdev2", 00:19:55.624 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:55.624 "is_configured": true, 00:19:55.624 "data_offset": 2048, 00:19:55.624 
"data_size": 63488 00:19:55.624 }, 00:19:55.624 { 00:19:55.624 "name": "BaseBdev3", 00:19:55.624 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:55.624 "is_configured": true, 00:19:55.624 "data_offset": 2048, 00:19:55.624 "data_size": 63488 00:19:55.624 } 00:19:55.624 ] 00:19:55.624 }' 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.624 19:39:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.190 [2024-12-05 19:39:49.347316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.190 [2024-12-05 19:39:49.347358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.190 [2024-12-05 19:39:49.347464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.190 [2024-12-05 19:39:49.347583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.190 [2024-12-05 19:39:49.347608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.190 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:56.448 /dev/nbd0 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:56.448 
19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.448 1+0 records in 00:19:56.448 1+0 records out 00:19:56.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394583 s, 10.4 MB/s 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.448 19:39:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:56.706 /dev/nbd1 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.706 1+0 records in 00:19:56.706 1+0 records out 00:19:56.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421855 s, 9.7 MB/s 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.706 19:39:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.706 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.964 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:57.573 19:39:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.573 19:39:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.857 19:39:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.857 [2024-12-05 19:39:51.036072] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.857 [2024-12-05 19:39:51.036154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.857 [2024-12-05 19:39:51.036190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:57.857 [2024-12-05 19:39:51.036209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.857 [2024-12-05 19:39:51.039199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.857 [2024-12-05 19:39:51.039252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:57.857 [2024-12-05 19:39:51.039367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:57.857 [2024-12-05 19:39:51.039438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.857 [2024-12-05 19:39:51.039622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.857 [2024-12-05 19:39:51.039800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.857 spare 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.857 [2024-12-05 19:39:51.139939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:57.857 [2024-12-05 19:39:51.139978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:19:57.857 [2024-12-05 19:39:51.140328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:57.857 [2024-12-05 19:39:51.145254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:57.857 [2024-12-05 19:39:51.145283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:57.857 [2024-12-05 19:39:51.145531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.857 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.857 "name": "raid_bdev1", 00:19:57.857 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:57.857 "strip_size_kb": 64, 00:19:57.857 "state": "online", 00:19:57.857 "raid_level": "raid5f", 00:19:57.857 "superblock": true, 00:19:57.857 "num_base_bdevs": 3, 00:19:57.857 "num_base_bdevs_discovered": 3, 00:19:57.857 "num_base_bdevs_operational": 3, 00:19:57.857 "base_bdevs_list": [ 00:19:57.857 { 00:19:57.857 "name": "spare", 00:19:57.858 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:57.858 "is_configured": true, 00:19:57.858 "data_offset": 2048, 00:19:57.858 "data_size": 63488 00:19:57.858 }, 00:19:57.858 { 00:19:57.858 "name": "BaseBdev2", 00:19:57.858 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:57.858 "is_configured": true, 00:19:57.858 "data_offset": 2048, 00:19:57.858 "data_size": 63488 00:19:57.858 }, 00:19:57.858 { 00:19:57.858 "name": "BaseBdev3", 00:19:57.858 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:57.858 "is_configured": true, 00:19:57.858 "data_offset": 2048, 00:19:57.858 "data_size": 63488 00:19:57.858 } 00:19:57.858 ] 00:19:57.858 }' 00:19:57.858 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.858 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.425 
19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.425 "name": "raid_bdev1", 00:19:58.425 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:58.425 "strip_size_kb": 64, 00:19:58.425 "state": "online", 00:19:58.425 "raid_level": "raid5f", 00:19:58.425 "superblock": true, 00:19:58.425 "num_base_bdevs": 3, 00:19:58.425 "num_base_bdevs_discovered": 3, 00:19:58.425 "num_base_bdevs_operational": 3, 00:19:58.425 "base_bdevs_list": [ 00:19:58.425 { 00:19:58.425 "name": "spare", 00:19:58.425 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:19:58.425 "is_configured": true, 00:19:58.425 "data_offset": 2048, 00:19:58.425 "data_size": 63488 00:19:58.425 }, 00:19:58.425 { 00:19:58.425 "name": "BaseBdev2", 00:19:58.425 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:58.425 "is_configured": true, 00:19:58.425 "data_offset": 2048, 00:19:58.425 "data_size": 63488 00:19:58.425 }, 00:19:58.425 { 00:19:58.425 "name": "BaseBdev3", 00:19:58.425 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:58.425 "is_configured": true, 00:19:58.425 "data_offset": 2048, 
00:19:58.425 "data_size": 63488 00:19:58.425 } 00:19:58.425 ] 00:19:58.425 }' 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.425 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.684 [2024-12-05 19:39:51.879367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.684 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.684 "name": "raid_bdev1", 00:19:58.684 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:19:58.684 "strip_size_kb": 64, 00:19:58.684 "state": "online", 00:19:58.684 "raid_level": "raid5f", 00:19:58.684 "superblock": true, 00:19:58.684 "num_base_bdevs": 3, 00:19:58.684 "num_base_bdevs_discovered": 2, 00:19:58.684 "num_base_bdevs_operational": 2, 00:19:58.684 "base_bdevs_list": [ 00:19:58.684 { 00:19:58.684 "name": null, 00:19:58.684 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:58.684 "is_configured": false, 00:19:58.684 "data_offset": 0, 00:19:58.684 "data_size": 63488 00:19:58.684 }, 00:19:58.684 { 00:19:58.684 "name": "BaseBdev2", 00:19:58.684 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:19:58.684 "is_configured": true, 00:19:58.685 "data_offset": 2048, 00:19:58.685 "data_size": 63488 00:19:58.685 }, 00:19:58.685 { 00:19:58.685 "name": "BaseBdev3", 00:19:58.685 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:19:58.685 "is_configured": true, 00:19:58.685 "data_offset": 2048, 00:19:58.685 "data_size": 63488 00:19:58.685 } 00:19:58.685 ] 00:19:58.685 }' 00:19:58.685 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.685 19:39:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.252 19:39:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:59.252 19:39:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.252 19:39:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.252 [2024-12-05 19:39:52.415554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.252 [2024-12-05 19:39:52.416045] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:59.252 [2024-12-05 19:39:52.416085] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:59.252 [2024-12-05 19:39:52.416140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.252 [2024-12-05 19:39:52.430519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:59.252 19:39:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.252 19:39:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:59.252 [2024-12-05 19:39:52.437864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.188 "name": "raid_bdev1", 00:20:00.188 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:00.188 "strip_size_kb": 64, 00:20:00.188 "state": "online", 00:20:00.188 
"raid_level": "raid5f", 00:20:00.188 "superblock": true, 00:20:00.188 "num_base_bdevs": 3, 00:20:00.188 "num_base_bdevs_discovered": 3, 00:20:00.188 "num_base_bdevs_operational": 3, 00:20:00.188 "process": { 00:20:00.188 "type": "rebuild", 00:20:00.188 "target": "spare", 00:20:00.188 "progress": { 00:20:00.188 "blocks": 18432, 00:20:00.188 "percent": 14 00:20:00.188 } 00:20:00.188 }, 00:20:00.188 "base_bdevs_list": [ 00:20:00.188 { 00:20:00.188 "name": "spare", 00:20:00.188 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:20:00.188 "is_configured": true, 00:20:00.188 "data_offset": 2048, 00:20:00.188 "data_size": 63488 00:20:00.188 }, 00:20:00.188 { 00:20:00.188 "name": "BaseBdev2", 00:20:00.188 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:00.188 "is_configured": true, 00:20:00.188 "data_offset": 2048, 00:20:00.188 "data_size": 63488 00:20:00.188 }, 00:20:00.188 { 00:20:00.188 "name": "BaseBdev3", 00:20:00.188 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:00.188 "is_configured": true, 00:20:00.188 "data_offset": 2048, 00:20:00.188 "data_size": 63488 00:20:00.188 } 00:20:00.188 ] 00:20:00.188 }' 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.188 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.189 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.189 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.189 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.189 [2024-12-05 19:39:53.607541] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.447 [2024-12-05 19:39:53.651417] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:00.447 [2024-12-05 19:39:53.651681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.447 [2024-12-05 19:39:53.651732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.447 [2024-12-05 19:39:53.651768] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.447 19:39:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.447 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.447 "name": "raid_bdev1", 00:20:00.447 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:00.447 "strip_size_kb": 64, 00:20:00.447 "state": "online", 00:20:00.447 "raid_level": "raid5f", 00:20:00.447 "superblock": true, 00:20:00.447 "num_base_bdevs": 3, 00:20:00.447 "num_base_bdevs_discovered": 2, 00:20:00.447 "num_base_bdevs_operational": 2, 00:20:00.447 "base_bdevs_list": [ 00:20:00.447 { 00:20:00.447 "name": null, 00:20:00.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.447 "is_configured": false, 00:20:00.447 "data_offset": 0, 00:20:00.447 "data_size": 63488 00:20:00.447 }, 00:20:00.447 { 00:20:00.447 "name": "BaseBdev2", 00:20:00.447 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:00.447 "is_configured": true, 00:20:00.447 "data_offset": 2048, 00:20:00.447 "data_size": 63488 00:20:00.447 }, 00:20:00.448 { 00:20:00.448 "name": "BaseBdev3", 00:20:00.448 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:00.448 "is_configured": true, 00:20:00.448 "data_offset": 2048, 00:20:00.448 "data_size": 63488 00:20:00.448 } 00:20:00.448 ] 00:20:00.448 }' 00:20:00.448 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.448 19:39:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.014 19:39:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:01.014 19:39:54 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.014 19:39:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.014 [2024-12-05 19:39:54.191131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:01.014 [2024-12-05 19:39:54.191427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.014 [2024-12-05 19:39:54.191472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:01.014 [2024-12-05 19:39:54.191495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.014 [2024-12-05 19:39:54.192168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.014 [2024-12-05 19:39:54.192202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:01.014 [2024-12-05 19:39:54.192336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:01.014 [2024-12-05 19:39:54.192366] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:01.014 [2024-12-05 19:39:54.192381] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:01.015 [2024-12-05 19:39:54.192422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.015 [2024-12-05 19:39:54.206384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:20:01.015 spare 00:20:01.015 19:39:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.015 19:39:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:01.015 [2024-12-05 19:39:54.213891] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.955 "name": "raid_bdev1", 00:20:01.955 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:01.955 "strip_size_kb": 64, 00:20:01.955 "state": 
"online", 00:20:01.955 "raid_level": "raid5f", 00:20:01.955 "superblock": true, 00:20:01.955 "num_base_bdevs": 3, 00:20:01.955 "num_base_bdevs_discovered": 3, 00:20:01.955 "num_base_bdevs_operational": 3, 00:20:01.955 "process": { 00:20:01.955 "type": "rebuild", 00:20:01.955 "target": "spare", 00:20:01.955 "progress": { 00:20:01.955 "blocks": 18432, 00:20:01.955 "percent": 14 00:20:01.955 } 00:20:01.955 }, 00:20:01.955 "base_bdevs_list": [ 00:20:01.955 { 00:20:01.955 "name": "spare", 00:20:01.955 "uuid": "7164960b-4055-5cc6-a0e7-82d43c836943", 00:20:01.955 "is_configured": true, 00:20:01.955 "data_offset": 2048, 00:20:01.955 "data_size": 63488 00:20:01.955 }, 00:20:01.955 { 00:20:01.955 "name": "BaseBdev2", 00:20:01.955 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:01.955 "is_configured": true, 00:20:01.955 "data_offset": 2048, 00:20:01.955 "data_size": 63488 00:20:01.955 }, 00:20:01.955 { 00:20:01.955 "name": "BaseBdev3", 00:20:01.955 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:01.955 "is_configured": true, 00:20:01.955 "data_offset": 2048, 00:20:01.955 "data_size": 63488 00:20:01.955 } 00:20:01.955 ] 00:20:01.955 }' 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.955 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.955 [2024-12-05 19:39:55.379451] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.214 [2024-12-05 19:39:55.427796] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:02.214 [2024-12-05 19:39:55.427891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.214 [2024-12-05 19:39:55.427922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.214 [2024-12-05 19:39:55.427935] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.214 "name": "raid_bdev1", 00:20:02.214 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:02.214 "strip_size_kb": 64, 00:20:02.214 "state": "online", 00:20:02.214 "raid_level": "raid5f", 00:20:02.214 "superblock": true, 00:20:02.214 "num_base_bdevs": 3, 00:20:02.214 "num_base_bdevs_discovered": 2, 00:20:02.214 "num_base_bdevs_operational": 2, 00:20:02.214 "base_bdevs_list": [ 00:20:02.214 { 00:20:02.214 "name": null, 00:20:02.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.214 "is_configured": false, 00:20:02.214 "data_offset": 0, 00:20:02.214 "data_size": 63488 00:20:02.214 }, 00:20:02.214 { 00:20:02.214 "name": "BaseBdev2", 00:20:02.214 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:02.214 "is_configured": true, 00:20:02.214 "data_offset": 2048, 00:20:02.214 "data_size": 63488 00:20:02.214 }, 00:20:02.214 { 00:20:02.214 "name": "BaseBdev3", 00:20:02.214 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:02.214 "is_configured": true, 00:20:02.214 "data_offset": 2048, 00:20:02.214 "data_size": 63488 00:20:02.214 } 00:20:02.214 ] 00:20:02.214 }' 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.214 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.781 19:39:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.781 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.781 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.781 "name": "raid_bdev1", 00:20:02.781 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:02.781 "strip_size_kb": 64, 00:20:02.781 "state": "online", 00:20:02.781 "raid_level": "raid5f", 00:20:02.781 "superblock": true, 00:20:02.781 "num_base_bdevs": 3, 00:20:02.781 "num_base_bdevs_discovered": 2, 00:20:02.781 "num_base_bdevs_operational": 2, 00:20:02.781 "base_bdevs_list": [ 00:20:02.781 { 00:20:02.781 "name": null, 00:20:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.781 "is_configured": false, 00:20:02.781 "data_offset": 0, 00:20:02.781 "data_size": 63488 00:20:02.781 }, 00:20:02.781 { 00:20:02.781 "name": "BaseBdev2", 00:20:02.781 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:02.781 "is_configured": true, 00:20:02.781 "data_offset": 2048, 00:20:02.781 "data_size": 63488 00:20:02.781 }, 00:20:02.781 { 00:20:02.781 "name": "BaseBdev3", 00:20:02.781 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:02.781 "is_configured": true, 
00:20:02.781 "data_offset": 2048, 00:20:02.781 "data_size": 63488 00:20:02.781 } 00:20:02.781 ] 00:20:02.781 }' 00:20:02.781 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.781 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.781 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.782 [2024-12-05 19:39:56.178224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.782 [2024-12-05 19:39:56.178571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.782 [2024-12-05 19:39:56.178625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:02.782 [2024-12-05 19:39:56.178643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.782 [2024-12-05 19:39:56.179323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.782 [2024-12-05 
19:39:56.179350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.782 [2024-12-05 19:39:56.179482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:02.782 [2024-12-05 19:39:56.179510] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:02.782 [2024-12-05 19:39:56.179537] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:02.782 [2024-12-05 19:39:56.179561] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:02.782 BaseBdev1 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.782 19:39:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.153 19:39:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.153 "name": "raid_bdev1", 00:20:04.153 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:04.153 "strip_size_kb": 64, 00:20:04.153 "state": "online", 00:20:04.153 "raid_level": "raid5f", 00:20:04.153 "superblock": true, 00:20:04.153 "num_base_bdevs": 3, 00:20:04.153 "num_base_bdevs_discovered": 2, 00:20:04.153 "num_base_bdevs_operational": 2, 00:20:04.153 "base_bdevs_list": [ 00:20:04.153 { 00:20:04.153 "name": null, 00:20:04.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.153 "is_configured": false, 00:20:04.153 "data_offset": 0, 00:20:04.153 "data_size": 63488 00:20:04.153 }, 00:20:04.153 { 00:20:04.153 "name": "BaseBdev2", 00:20:04.153 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:04.153 "is_configured": true, 00:20:04.153 "data_offset": 2048, 00:20:04.153 "data_size": 63488 00:20:04.153 }, 00:20:04.153 { 00:20:04.153 "name": "BaseBdev3", 00:20:04.153 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:04.153 "is_configured": true, 00:20:04.153 "data_offset": 2048, 00:20:04.153 "data_size": 63488 00:20:04.153 } 00:20:04.153 ] 00:20:04.153 }' 00:20:04.153 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.154 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.412 "name": "raid_bdev1", 00:20:04.412 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:04.412 "strip_size_kb": 64, 00:20:04.412 "state": "online", 00:20:04.412 "raid_level": "raid5f", 00:20:04.412 "superblock": true, 00:20:04.412 "num_base_bdevs": 3, 00:20:04.412 "num_base_bdevs_discovered": 2, 00:20:04.412 "num_base_bdevs_operational": 2, 00:20:04.412 "base_bdevs_list": [ 00:20:04.412 { 00:20:04.412 "name": null, 00:20:04.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.412 "is_configured": false, 00:20:04.412 "data_offset": 0, 00:20:04.412 "data_size": 63488 00:20:04.412 }, 00:20:04.412 { 00:20:04.412 "name": "BaseBdev2", 00:20:04.412 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 
00:20:04.412 "is_configured": true, 00:20:04.412 "data_offset": 2048, 00:20:04.412 "data_size": 63488 00:20:04.412 }, 00:20:04.412 { 00:20:04.412 "name": "BaseBdev3", 00:20:04.412 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:04.412 "is_configured": true, 00:20:04.412 "data_offset": 2048, 00:20:04.412 "data_size": 63488 00:20:04.412 } 00:20:04.412 ] 00:20:04.412 }' 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.412 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:04.670 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.671 19:39:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.671 [2024-12-05 19:39:57.882921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.671 [2024-12-05 19:39:57.883282] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.671 [2024-12-05 19:39:57.883317] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:04.671 request: 00:20:04.671 { 00:20:04.671 "base_bdev": "BaseBdev1", 00:20:04.671 "raid_bdev": "raid_bdev1", 00:20:04.671 "method": "bdev_raid_add_base_bdev", 00:20:04.671 "req_id": 1 00:20:04.671 } 00:20:04.671 Got JSON-RPC error response 00:20:04.671 response: 00:20:04.671 { 00:20:04.671 "code": -22, 00:20:04.671 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:04.671 } 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.671 19:39:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.607 "name": "raid_bdev1", 00:20:05.607 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:05.607 "strip_size_kb": 64, 00:20:05.607 "state": "online", 00:20:05.607 "raid_level": "raid5f", 00:20:05.607 "superblock": true, 00:20:05.607 "num_base_bdevs": 3, 00:20:05.607 "num_base_bdevs_discovered": 2, 00:20:05.607 "num_base_bdevs_operational": 2, 00:20:05.607 "base_bdevs_list": [ 00:20:05.607 { 00:20:05.607 "name": null, 00:20:05.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.607 "is_configured": false, 00:20:05.607 "data_offset": 0, 00:20:05.607 "data_size": 63488 00:20:05.607 }, 00:20:05.607 { 00:20:05.607 
"name": "BaseBdev2", 00:20:05.607 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:05.607 "is_configured": true, 00:20:05.607 "data_offset": 2048, 00:20:05.607 "data_size": 63488 00:20:05.607 }, 00:20:05.607 { 00:20:05.607 "name": "BaseBdev3", 00:20:05.607 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:05.607 "is_configured": true, 00:20:05.607 "data_offset": 2048, 00:20:05.607 "data_size": 63488 00:20:05.607 } 00:20:05.607 ] 00:20:05.607 }' 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.607 19:39:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.175 "name": "raid_bdev1", 00:20:06.175 "uuid": "6d32e682-f8c4-4faf-b607-3d4572f5fd90", 00:20:06.175 
"strip_size_kb": 64, 00:20:06.175 "state": "online", 00:20:06.175 "raid_level": "raid5f", 00:20:06.175 "superblock": true, 00:20:06.175 "num_base_bdevs": 3, 00:20:06.175 "num_base_bdevs_discovered": 2, 00:20:06.175 "num_base_bdevs_operational": 2, 00:20:06.175 "base_bdevs_list": [ 00:20:06.175 { 00:20:06.175 "name": null, 00:20:06.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.175 "is_configured": false, 00:20:06.175 "data_offset": 0, 00:20:06.175 "data_size": 63488 00:20:06.175 }, 00:20:06.175 { 00:20:06.175 "name": "BaseBdev2", 00:20:06.175 "uuid": "26dcf336-0832-534f-8f42-833651e7ef76", 00:20:06.175 "is_configured": true, 00:20:06.175 "data_offset": 2048, 00:20:06.175 "data_size": 63488 00:20:06.175 }, 00:20:06.175 { 00:20:06.175 "name": "BaseBdev3", 00:20:06.175 "uuid": "86884189-731a-5e7f-8e49-8b954578122d", 00:20:06.175 "is_configured": true, 00:20:06.175 "data_offset": 2048, 00:20:06.175 "data_size": 63488 00:20:06.175 } 00:20:06.175 ] 00:20:06.175 }' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82412 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82412 ']' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82412 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.175 19:39:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82412 00:20:06.175 killing process with pid 82412 00:20:06.175 Received shutdown signal, test time was about 60.000000 seconds 00:20:06.175 00:20:06.175 Latency(us) 00:20:06.175 [2024-12-05T19:39:59.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.175 [2024-12-05T19:39:59.616Z] =================================================================================================================== 00:20:06.175 [2024-12-05T19:39:59.616Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82412' 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82412 00:20:06.175 [2024-12-05 19:39:59.603736] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.175 19:39:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82412 00:20:06.175 [2024-12-05 19:39:59.603928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.175 [2024-12-05 19:39:59.604015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.175 [2024-12-05 19:39:59.604038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:06.743 [2024-12-05 19:39:59.959385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.679 19:40:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:07.679 00:20:07.679 real 0m25.142s 00:20:07.679 user 0m33.495s 
00:20:07.679 sys 0m2.746s 00:20:07.679 19:40:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.679 ************************************ 00:20:07.679 END TEST raid5f_rebuild_test_sb 00:20:07.679 ************************************ 00:20:07.679 19:40:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.679 19:40:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:07.679 19:40:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:20:07.679 19:40:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:07.679 19:40:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.679 19:40:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.679 ************************************ 00:20:07.679 START TEST raid5f_state_function_test 00:20:07.679 ************************************ 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83174 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83174' 00:20:07.679 Process raid pid: 83174 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83174 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83174 ']' 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.679 19:40:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.938 [2024-12-05 19:40:01.143944] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:07.938 [2024-12-05 19:40:01.144267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.938 [2024-12-05 19:40:01.314907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.196 [2024-12-05 19:40:01.441414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.456 [2024-12-05 19:40:01.652923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.456 [2024-12-05 19:40:01.653242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.715 [2024-12-05 19:40:02.135603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.715 [2024-12-05 19:40:02.135861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.715 [2024-12-05 19:40:02.135996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.715 [2024-12-05 19:40:02.136061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.715 [2024-12-05 19:40:02.136278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:08.715 [2024-12-05 19:40:02.136312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:08.715 [2024-12-05 19:40:02.136324] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:08.715 [2024-12-05 19:40:02.136339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.715 19:40:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.715 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.974 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.974 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.974 "name": "Existed_Raid", 00:20:08.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.974 "strip_size_kb": 64, 00:20:08.974 "state": "configuring", 00:20:08.974 "raid_level": "raid5f", 00:20:08.974 "superblock": false, 00:20:08.974 "num_base_bdevs": 4, 00:20:08.974 "num_base_bdevs_discovered": 0, 00:20:08.974 "num_base_bdevs_operational": 4, 00:20:08.974 "base_bdevs_list": [ 00:20:08.974 { 00:20:08.974 "name": "BaseBdev1", 00:20:08.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.974 "is_configured": false, 00:20:08.974 "data_offset": 0, 00:20:08.974 "data_size": 0 00:20:08.974 }, 00:20:08.974 { 00:20:08.974 "name": "BaseBdev2", 00:20:08.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.974 "is_configured": false, 00:20:08.974 "data_offset": 0, 00:20:08.974 "data_size": 0 00:20:08.974 }, 00:20:08.974 { 00:20:08.974 "name": "BaseBdev3", 00:20:08.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.974 "is_configured": false, 00:20:08.974 "data_offset": 0, 00:20:08.974 "data_size": 0 00:20:08.974 }, 00:20:08.974 { 00:20:08.974 "name": "BaseBdev4", 00:20:08.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.974 "is_configured": false, 00:20:08.974 "data_offset": 0, 00:20:08.974 "data_size": 0 00:20:08.974 } 00:20:08.974 ] 00:20:08.974 }' 00:20:08.974 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.974 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 [2024-12-05 19:40:02.655666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.232 [2024-12-05 19:40:02.655728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 [2024-12-05 19:40:02.667759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.232 [2024-12-05 19:40:02.667999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.232 [2024-12-05 19:40:02.668126] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.232 [2024-12-05 19:40:02.668188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.232 [2024-12-05 19:40:02.668296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.232 [2024-12-05 19:40:02.668438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.232 [2024-12-05 19:40:02.668593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:09.232 [2024-12-05 19:40:02.668801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.232 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:09.233 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.233 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.492 [2024-12-05 19:40:02.714447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.492 BaseBdev1 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.492 
19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.492 [ 00:20:09.492 { 00:20:09.492 "name": "BaseBdev1", 00:20:09.492 "aliases": [ 00:20:09.492 "9b5bd4aa-636c-461e-923e-195edafcb57a" 00:20:09.492 ], 00:20:09.492 "product_name": "Malloc disk", 00:20:09.492 "block_size": 512, 00:20:09.492 "num_blocks": 65536, 00:20:09.492 "uuid": "9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:09.492 "assigned_rate_limits": { 00:20:09.492 "rw_ios_per_sec": 0, 00:20:09.492 "rw_mbytes_per_sec": 0, 00:20:09.492 "r_mbytes_per_sec": 0, 00:20:09.492 "w_mbytes_per_sec": 0 00:20:09.492 }, 00:20:09.492 "claimed": true, 00:20:09.492 "claim_type": "exclusive_write", 00:20:09.492 "zoned": false, 00:20:09.492 "supported_io_types": { 00:20:09.492 "read": true, 00:20:09.492 "write": true, 00:20:09.492 "unmap": true, 00:20:09.492 "flush": true, 00:20:09.492 "reset": true, 00:20:09.492 "nvme_admin": false, 00:20:09.492 "nvme_io": false, 00:20:09.492 "nvme_io_md": false, 00:20:09.492 "write_zeroes": true, 00:20:09.492 "zcopy": true, 00:20:09.492 "get_zone_info": false, 00:20:09.492 "zone_management": false, 00:20:09.492 "zone_append": false, 00:20:09.492 "compare": false, 00:20:09.492 "compare_and_write": false, 00:20:09.492 "abort": true, 00:20:09.492 "seek_hole": false, 00:20:09.492 "seek_data": false, 00:20:09.492 "copy": true, 00:20:09.492 "nvme_iov_md": false 00:20:09.492 }, 00:20:09.492 "memory_domains": [ 00:20:09.492 { 00:20:09.492 "dma_device_id": "system", 00:20:09.492 "dma_device_type": 1 00:20:09.492 }, 00:20:09.492 { 00:20:09.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.492 "dma_device_type": 2 00:20:09.492 } 00:20:09.492 ], 00:20:09.492 "driver_specific": {} 00:20:09.492 } 
00:20:09.492 ] 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:09.492 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.492 "name": "Existed_Raid", 00:20:09.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.492 "strip_size_kb": 64, 00:20:09.492 "state": "configuring", 00:20:09.492 "raid_level": "raid5f", 00:20:09.492 "superblock": false, 00:20:09.492 "num_base_bdevs": 4, 00:20:09.492 "num_base_bdevs_discovered": 1, 00:20:09.492 "num_base_bdevs_operational": 4, 00:20:09.492 "base_bdevs_list": [ 00:20:09.492 { 00:20:09.492 "name": "BaseBdev1", 00:20:09.492 "uuid": "9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:09.492 "is_configured": true, 00:20:09.492 "data_offset": 0, 00:20:09.492 "data_size": 65536 00:20:09.492 }, 00:20:09.492 { 00:20:09.492 "name": "BaseBdev2", 00:20:09.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.492 "is_configured": false, 00:20:09.492 "data_offset": 0, 00:20:09.492 "data_size": 0 00:20:09.492 }, 00:20:09.492 { 00:20:09.492 "name": "BaseBdev3", 00:20:09.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.492 "is_configured": false, 00:20:09.492 "data_offset": 0, 00:20:09.492 "data_size": 0 00:20:09.492 }, 00:20:09.492 { 00:20:09.492 "name": "BaseBdev4", 00:20:09.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.492 "is_configured": false, 00:20:09.492 "data_offset": 0, 00:20:09.492 "data_size": 0 00:20:09.492 } 00:20:09.492 ] 00:20:09.493 }' 00:20:09.493 19:40:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.493 19:40:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.061 
[2024-12-05 19:40:03.274672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.061 [2024-12-05 19:40:03.274756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.061 [2024-12-05 19:40:03.286751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.061 [2024-12-05 19:40:03.289386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.061 [2024-12-05 19:40:03.289568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.061 [2024-12-05 19:40:03.289596] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.061 [2024-12-05 19:40:03.289616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.061 [2024-12-05 19:40:03.289627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:10.061 [2024-12-05 19:40:03.289640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.061 "name": "Existed_Raid", 00:20:10.061 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:10.061 "strip_size_kb": 64, 00:20:10.061 "state": "configuring", 00:20:10.061 "raid_level": "raid5f", 00:20:10.061 "superblock": false, 00:20:10.061 "num_base_bdevs": 4, 00:20:10.061 "num_base_bdevs_discovered": 1, 00:20:10.061 "num_base_bdevs_operational": 4, 00:20:10.061 "base_bdevs_list": [ 00:20:10.061 { 00:20:10.061 "name": "BaseBdev1", 00:20:10.061 "uuid": "9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:10.061 "is_configured": true, 00:20:10.061 "data_offset": 0, 00:20:10.061 "data_size": 65536 00:20:10.061 }, 00:20:10.061 { 00:20:10.061 "name": "BaseBdev2", 00:20:10.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.061 "is_configured": false, 00:20:10.061 "data_offset": 0, 00:20:10.061 "data_size": 0 00:20:10.061 }, 00:20:10.061 { 00:20:10.061 "name": "BaseBdev3", 00:20:10.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.061 "is_configured": false, 00:20:10.061 "data_offset": 0, 00:20:10.061 "data_size": 0 00:20:10.061 }, 00:20:10.061 { 00:20:10.061 "name": "BaseBdev4", 00:20:10.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.061 "is_configured": false, 00:20:10.061 "data_offset": 0, 00:20:10.061 "data_size": 0 00:20:10.061 } 00:20:10.061 ] 00:20:10.061 }' 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.061 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.628 BaseBdev2 00:20:10.628 [2024-12-05 19:40:03.843902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.628 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.629 [ 00:20:10.629 { 00:20:10.629 "name": "BaseBdev2", 00:20:10.629 "aliases": [ 00:20:10.629 "eb39dcae-3097-4904-89f6-e575f330f6ad" 00:20:10.629 ], 00:20:10.629 "product_name": "Malloc disk", 00:20:10.629 "block_size": 512, 00:20:10.629 "num_blocks": 65536, 00:20:10.629 "uuid": "eb39dcae-3097-4904-89f6-e575f330f6ad", 00:20:10.629 "assigned_rate_limits": { 00:20:10.629 "rw_ios_per_sec": 0, 00:20:10.629 "rw_mbytes_per_sec": 0, 00:20:10.629 
"r_mbytes_per_sec": 0, 00:20:10.629 "w_mbytes_per_sec": 0 00:20:10.629 }, 00:20:10.629 "claimed": true, 00:20:10.629 "claim_type": "exclusive_write", 00:20:10.629 "zoned": false, 00:20:10.629 "supported_io_types": { 00:20:10.629 "read": true, 00:20:10.629 "write": true, 00:20:10.629 "unmap": true, 00:20:10.629 "flush": true, 00:20:10.629 "reset": true, 00:20:10.629 "nvme_admin": false, 00:20:10.629 "nvme_io": false, 00:20:10.629 "nvme_io_md": false, 00:20:10.629 "write_zeroes": true, 00:20:10.629 "zcopy": true, 00:20:10.629 "get_zone_info": false, 00:20:10.629 "zone_management": false, 00:20:10.629 "zone_append": false, 00:20:10.629 "compare": false, 00:20:10.629 "compare_and_write": false, 00:20:10.629 "abort": true, 00:20:10.629 "seek_hole": false, 00:20:10.629 "seek_data": false, 00:20:10.629 "copy": true, 00:20:10.629 "nvme_iov_md": false 00:20:10.629 }, 00:20:10.629 "memory_domains": [ 00:20:10.629 { 00:20:10.629 "dma_device_id": "system", 00:20:10.629 "dma_device_type": 1 00:20:10.629 }, 00:20:10.629 { 00:20:10.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.629 "dma_device_type": 2 00:20:10.629 } 00:20:10.629 ], 00:20:10.629 "driver_specific": {} 00:20:10.629 } 00:20:10.629 ] 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.629 "name": "Existed_Raid", 00:20:10.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.629 "strip_size_kb": 64, 00:20:10.629 "state": "configuring", 00:20:10.629 "raid_level": "raid5f", 00:20:10.629 "superblock": false, 00:20:10.629 "num_base_bdevs": 4, 00:20:10.629 "num_base_bdevs_discovered": 2, 00:20:10.629 "num_base_bdevs_operational": 4, 00:20:10.629 "base_bdevs_list": [ 00:20:10.629 { 00:20:10.629 "name": "BaseBdev1", 00:20:10.629 "uuid": 
"9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:10.629 "is_configured": true, 00:20:10.629 "data_offset": 0, 00:20:10.629 "data_size": 65536 00:20:10.629 }, 00:20:10.629 { 00:20:10.629 "name": "BaseBdev2", 00:20:10.629 "uuid": "eb39dcae-3097-4904-89f6-e575f330f6ad", 00:20:10.629 "is_configured": true, 00:20:10.629 "data_offset": 0, 00:20:10.629 "data_size": 65536 00:20:10.629 }, 00:20:10.629 { 00:20:10.629 "name": "BaseBdev3", 00:20:10.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.629 "is_configured": false, 00:20:10.629 "data_offset": 0, 00:20:10.629 "data_size": 0 00:20:10.629 }, 00:20:10.629 { 00:20:10.629 "name": "BaseBdev4", 00:20:10.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.629 "is_configured": false, 00:20:10.629 "data_offset": 0, 00:20:10.629 "data_size": 0 00:20:10.629 } 00:20:10.629 ] 00:20:10.629 }' 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.629 19:40:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.198 [2024-12-05 19:40:04.435570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.198 BaseBdev3 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.198 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.198 [ 00:20:11.198 { 00:20:11.198 "name": "BaseBdev3", 00:20:11.198 "aliases": [ 00:20:11.198 "8e938f28-0fa1-4080-b0c8-25c84d3950d6" 00:20:11.198 ], 00:20:11.198 "product_name": "Malloc disk", 00:20:11.198 "block_size": 512, 00:20:11.198 "num_blocks": 65536, 00:20:11.198 "uuid": "8e938f28-0fa1-4080-b0c8-25c84d3950d6", 00:20:11.198 "assigned_rate_limits": { 00:20:11.198 "rw_ios_per_sec": 0, 00:20:11.198 "rw_mbytes_per_sec": 0, 00:20:11.198 "r_mbytes_per_sec": 0, 00:20:11.198 "w_mbytes_per_sec": 0 00:20:11.198 }, 00:20:11.198 "claimed": true, 00:20:11.198 "claim_type": "exclusive_write", 00:20:11.198 "zoned": false, 00:20:11.198 "supported_io_types": { 00:20:11.198 "read": true, 00:20:11.198 "write": true, 00:20:11.198 "unmap": true, 00:20:11.198 "flush": true, 00:20:11.198 "reset": true, 00:20:11.198 "nvme_admin": false, 
00:20:11.198 "nvme_io": false, 00:20:11.198 "nvme_io_md": false, 00:20:11.198 "write_zeroes": true, 00:20:11.198 "zcopy": true, 00:20:11.198 "get_zone_info": false, 00:20:11.198 "zone_management": false, 00:20:11.198 "zone_append": false, 00:20:11.198 "compare": false, 00:20:11.198 "compare_and_write": false, 00:20:11.198 "abort": true, 00:20:11.198 "seek_hole": false, 00:20:11.198 "seek_data": false, 00:20:11.198 "copy": true, 00:20:11.198 "nvme_iov_md": false 00:20:11.198 }, 00:20:11.198 "memory_domains": [ 00:20:11.198 { 00:20:11.198 "dma_device_id": "system", 00:20:11.198 "dma_device_type": 1 00:20:11.198 }, 00:20:11.198 { 00:20:11.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.198 "dma_device_type": 2 00:20:11.198 } 00:20:11.199 ], 00:20:11.199 "driver_specific": {} 00:20:11.199 } 00:20:11.199 ] 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.199 "name": "Existed_Raid", 00:20:11.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.199 "strip_size_kb": 64, 00:20:11.199 "state": "configuring", 00:20:11.199 "raid_level": "raid5f", 00:20:11.199 "superblock": false, 00:20:11.199 "num_base_bdevs": 4, 00:20:11.199 "num_base_bdevs_discovered": 3, 00:20:11.199 "num_base_bdevs_operational": 4, 00:20:11.199 "base_bdevs_list": [ 00:20:11.199 { 00:20:11.199 "name": "BaseBdev1", 00:20:11.199 "uuid": "9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:11.199 "is_configured": true, 00:20:11.199 "data_offset": 0, 00:20:11.199 "data_size": 65536 00:20:11.199 }, 00:20:11.199 { 00:20:11.199 "name": "BaseBdev2", 00:20:11.199 "uuid": "eb39dcae-3097-4904-89f6-e575f330f6ad", 00:20:11.199 "is_configured": true, 00:20:11.199 "data_offset": 0, 00:20:11.199 "data_size": 65536 00:20:11.199 }, 00:20:11.199 { 
00:20:11.199 "name": "BaseBdev3", 00:20:11.199 "uuid": "8e938f28-0fa1-4080-b0c8-25c84d3950d6", 00:20:11.199 "is_configured": true, 00:20:11.199 "data_offset": 0, 00:20:11.199 "data_size": 65536 00:20:11.199 }, 00:20:11.199 { 00:20:11.199 "name": "BaseBdev4", 00:20:11.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.199 "is_configured": false, 00:20:11.199 "data_offset": 0, 00:20:11.199 "data_size": 0 00:20:11.199 } 00:20:11.199 ] 00:20:11.199 }' 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.199 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.768 19:40:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.768 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.768 19:40:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.768 [2024-12-05 19:40:05.034198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.768 [2024-12-05 19:40:05.034476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:11.768 [2024-12-05 19:40:05.034502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:11.768 [2024-12-05 19:40:05.034886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:11.768 [2024-12-05 19:40:05.042921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:11.768 [2024-12-05 19:40:05.043153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:11.768 [2024-12-05 19:40:05.043621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.768 BaseBdev4 00:20:11.768 19:40:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.768 [ 00:20:11.768 { 00:20:11.768 "name": "BaseBdev4", 00:20:11.768 "aliases": [ 00:20:11.768 "4e5d9835-9e6a-464e-906d-656b9fba7fb7" 00:20:11.768 ], 00:20:11.768 "product_name": "Malloc disk", 00:20:11.768 "block_size": 512, 00:20:11.768 "num_blocks": 65536, 00:20:11.768 "uuid": "4e5d9835-9e6a-464e-906d-656b9fba7fb7", 00:20:11.768 "assigned_rate_limits": { 00:20:11.768 "rw_ios_per_sec": 0, 00:20:11.768 
"rw_mbytes_per_sec": 0, 00:20:11.768 "r_mbytes_per_sec": 0, 00:20:11.768 "w_mbytes_per_sec": 0 00:20:11.768 }, 00:20:11.768 "claimed": true, 00:20:11.768 "claim_type": "exclusive_write", 00:20:11.768 "zoned": false, 00:20:11.768 "supported_io_types": { 00:20:11.768 "read": true, 00:20:11.768 "write": true, 00:20:11.768 "unmap": true, 00:20:11.768 "flush": true, 00:20:11.768 "reset": true, 00:20:11.768 "nvme_admin": false, 00:20:11.768 "nvme_io": false, 00:20:11.768 "nvme_io_md": false, 00:20:11.768 "write_zeroes": true, 00:20:11.768 "zcopy": true, 00:20:11.768 "get_zone_info": false, 00:20:11.768 "zone_management": false, 00:20:11.768 "zone_append": false, 00:20:11.768 "compare": false, 00:20:11.768 "compare_and_write": false, 00:20:11.768 "abort": true, 00:20:11.768 "seek_hole": false, 00:20:11.768 "seek_data": false, 00:20:11.768 "copy": true, 00:20:11.768 "nvme_iov_md": false 00:20:11.768 }, 00:20:11.768 "memory_domains": [ 00:20:11.768 { 00:20:11.768 "dma_device_id": "system", 00:20:11.768 "dma_device_type": 1 00:20:11.768 }, 00:20:11.768 { 00:20:11.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.768 "dma_device_type": 2 00:20:11.768 } 00:20:11.768 ], 00:20:11.768 "driver_specific": {} 00:20:11.768 } 00:20:11.768 ] 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.768 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.769 19:40:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.769 "name": "Existed_Raid", 00:20:11.769 "uuid": "e4f1e978-6b5e-411c-9c1d-a3c89b750e7c", 00:20:11.769 "strip_size_kb": 64, 00:20:11.769 "state": "online", 00:20:11.769 "raid_level": "raid5f", 00:20:11.769 "superblock": false, 00:20:11.769 "num_base_bdevs": 4, 00:20:11.769 "num_base_bdevs_discovered": 4, 00:20:11.769 "num_base_bdevs_operational": 4, 00:20:11.769 "base_bdevs_list": [ 00:20:11.769 { 00:20:11.769 "name": 
"BaseBdev1", 00:20:11.769 "uuid": "9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:11.769 "is_configured": true, 00:20:11.769 "data_offset": 0, 00:20:11.769 "data_size": 65536 00:20:11.769 }, 00:20:11.769 { 00:20:11.769 "name": "BaseBdev2", 00:20:11.769 "uuid": "eb39dcae-3097-4904-89f6-e575f330f6ad", 00:20:11.769 "is_configured": true, 00:20:11.769 "data_offset": 0, 00:20:11.769 "data_size": 65536 00:20:11.769 }, 00:20:11.769 { 00:20:11.769 "name": "BaseBdev3", 00:20:11.769 "uuid": "8e938f28-0fa1-4080-b0c8-25c84d3950d6", 00:20:11.769 "is_configured": true, 00:20:11.769 "data_offset": 0, 00:20:11.769 "data_size": 65536 00:20:11.769 }, 00:20:11.769 { 00:20:11.769 "name": "BaseBdev4", 00:20:11.769 "uuid": "4e5d9835-9e6a-464e-906d-656b9fba7fb7", 00:20:11.769 "is_configured": true, 00:20:11.769 "data_offset": 0, 00:20:11.769 "data_size": 65536 00:20:11.769 } 00:20:11.769 ] 00:20:11.769 }' 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.769 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.336 [2024-12-05 19:40:05.600035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.336 "name": "Existed_Raid", 00:20:12.336 "aliases": [ 00:20:12.336 "e4f1e978-6b5e-411c-9c1d-a3c89b750e7c" 00:20:12.336 ], 00:20:12.336 "product_name": "Raid Volume", 00:20:12.336 "block_size": 512, 00:20:12.336 "num_blocks": 196608, 00:20:12.336 "uuid": "e4f1e978-6b5e-411c-9c1d-a3c89b750e7c", 00:20:12.336 "assigned_rate_limits": { 00:20:12.336 "rw_ios_per_sec": 0, 00:20:12.336 "rw_mbytes_per_sec": 0, 00:20:12.336 "r_mbytes_per_sec": 0, 00:20:12.336 "w_mbytes_per_sec": 0 00:20:12.336 }, 00:20:12.336 "claimed": false, 00:20:12.336 "zoned": false, 00:20:12.336 "supported_io_types": { 00:20:12.336 "read": true, 00:20:12.336 "write": true, 00:20:12.336 "unmap": false, 00:20:12.336 "flush": false, 00:20:12.336 "reset": true, 00:20:12.336 "nvme_admin": false, 00:20:12.336 "nvme_io": false, 00:20:12.336 "nvme_io_md": false, 00:20:12.336 "write_zeroes": true, 00:20:12.336 "zcopy": false, 00:20:12.336 "get_zone_info": false, 00:20:12.336 "zone_management": false, 00:20:12.336 "zone_append": false, 00:20:12.336 "compare": false, 00:20:12.336 "compare_and_write": false, 00:20:12.336 "abort": false, 00:20:12.336 "seek_hole": false, 00:20:12.336 "seek_data": false, 00:20:12.336 "copy": false, 00:20:12.336 "nvme_iov_md": false 00:20:12.336 }, 00:20:12.336 "driver_specific": { 00:20:12.336 "raid": { 00:20:12.336 "uuid": "e4f1e978-6b5e-411c-9c1d-a3c89b750e7c", 00:20:12.336 "strip_size_kb": 64, 
00:20:12.336 "state": "online", 00:20:12.336 "raid_level": "raid5f", 00:20:12.336 "superblock": false, 00:20:12.336 "num_base_bdevs": 4, 00:20:12.336 "num_base_bdevs_discovered": 4, 00:20:12.336 "num_base_bdevs_operational": 4, 00:20:12.336 "base_bdevs_list": [ 00:20:12.336 { 00:20:12.336 "name": "BaseBdev1", 00:20:12.336 "uuid": "9b5bd4aa-636c-461e-923e-195edafcb57a", 00:20:12.336 "is_configured": true, 00:20:12.336 "data_offset": 0, 00:20:12.336 "data_size": 65536 00:20:12.336 }, 00:20:12.336 { 00:20:12.336 "name": "BaseBdev2", 00:20:12.336 "uuid": "eb39dcae-3097-4904-89f6-e575f330f6ad", 00:20:12.336 "is_configured": true, 00:20:12.336 "data_offset": 0, 00:20:12.336 "data_size": 65536 00:20:12.336 }, 00:20:12.336 { 00:20:12.336 "name": "BaseBdev3", 00:20:12.336 "uuid": "8e938f28-0fa1-4080-b0c8-25c84d3950d6", 00:20:12.336 "is_configured": true, 00:20:12.336 "data_offset": 0, 00:20:12.336 "data_size": 65536 00:20:12.336 }, 00:20:12.336 { 00:20:12.336 "name": "BaseBdev4", 00:20:12.336 "uuid": "4e5d9835-9e6a-464e-906d-656b9fba7fb7", 00:20:12.336 "is_configured": true, 00:20:12.336 "data_offset": 0, 00:20:12.336 "data_size": 65536 00:20:12.336 } 00:20:12.336 ] 00:20:12.336 } 00:20:12.336 } 00:20:12.336 }' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:12.336 BaseBdev2 00:20:12.336 BaseBdev3 00:20:12.336 BaseBdev4' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.336 19:40:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.336 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.596 19:40:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.596 [2024-12-05 19:40:05.988011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:12.855 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.856 19:40:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.856 "name": "Existed_Raid", 00:20:12.856 "uuid": "e4f1e978-6b5e-411c-9c1d-a3c89b750e7c", 00:20:12.856 "strip_size_kb": 64, 00:20:12.856 "state": "online", 00:20:12.856 "raid_level": "raid5f", 00:20:12.856 "superblock": false, 00:20:12.856 "num_base_bdevs": 4, 00:20:12.856 "num_base_bdevs_discovered": 3, 00:20:12.856 "num_base_bdevs_operational": 3, 00:20:12.856 "base_bdevs_list": [ 00:20:12.856 { 00:20:12.856 "name": null, 00:20:12.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.856 "is_configured": false, 00:20:12.856 "data_offset": 0, 00:20:12.856 "data_size": 65536 00:20:12.856 }, 00:20:12.856 { 00:20:12.856 "name": "BaseBdev2", 00:20:12.856 "uuid": "eb39dcae-3097-4904-89f6-e575f330f6ad", 00:20:12.856 "is_configured": true, 00:20:12.856 "data_offset": 0, 00:20:12.856 "data_size": 65536 00:20:12.856 }, 00:20:12.856 { 00:20:12.856 "name": "BaseBdev3", 00:20:12.856 "uuid": "8e938f28-0fa1-4080-b0c8-25c84d3950d6", 00:20:12.856 "is_configured": true, 00:20:12.856 "data_offset": 0, 00:20:12.856 "data_size": 65536 00:20:12.856 }, 00:20:12.856 { 00:20:12.856 "name": "BaseBdev4", 00:20:12.856 "uuid": "4e5d9835-9e6a-464e-906d-656b9fba7fb7", 00:20:12.856 "is_configured": true, 00:20:12.856 "data_offset": 0, 00:20:12.856 "data_size": 65536 00:20:12.856 } 00:20:12.856 ] 00:20:12.856 }' 00:20:12.856 
19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.856 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 [2024-12-05 19:40:06.671701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.424 [2024-12-05 19:40:06.672042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.424 [2024-12-05 19:40:06.758019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.424 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:13.425 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.425 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.425 [2024-12-05 19:40:06.822131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.683 19:40:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.683 [2024-12-05 19:40:06.979023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:13.683 [2024-12-05 19:40:06.979317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:13.683 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:13.684 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.944 BaseBdev2 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.944 [ 00:20:13.944 { 00:20:13.944 "name": "BaseBdev2", 00:20:13.944 "aliases": [ 00:20:13.944 "eb7273d3-1a62-42df-9bbb-5f03002ada35" 00:20:13.944 ], 00:20:13.944 "product_name": "Malloc disk", 00:20:13.944 "block_size": 512, 00:20:13.944 "num_blocks": 65536, 00:20:13.944 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:13.944 "assigned_rate_limits": { 00:20:13.944 "rw_ios_per_sec": 0, 00:20:13.944 "rw_mbytes_per_sec": 0, 00:20:13.944 "r_mbytes_per_sec": 0, 00:20:13.944 "w_mbytes_per_sec": 0 00:20:13.944 }, 00:20:13.944 "claimed": false, 00:20:13.944 "zoned": false, 00:20:13.944 "supported_io_types": { 00:20:13.944 "read": true, 00:20:13.944 "write": true, 00:20:13.944 "unmap": true, 00:20:13.944 "flush": true, 00:20:13.944 "reset": true, 00:20:13.944 "nvme_admin": false, 00:20:13.944 "nvme_io": false, 00:20:13.944 "nvme_io_md": false, 00:20:13.944 "write_zeroes": true, 00:20:13.944 "zcopy": true, 00:20:13.944 "get_zone_info": false, 00:20:13.944 "zone_management": false, 00:20:13.944 "zone_append": false, 00:20:13.944 "compare": false, 00:20:13.944 "compare_and_write": false, 00:20:13.944 "abort": true, 00:20:13.944 "seek_hole": false, 00:20:13.944 "seek_data": false, 00:20:13.944 "copy": true, 00:20:13.944 "nvme_iov_md": false 00:20:13.944 }, 00:20:13.944 "memory_domains": [ 00:20:13.944 { 00:20:13.944 "dma_device_id": "system", 00:20:13.944 
"dma_device_type": 1 00:20:13.944 }, 00:20:13.944 { 00:20:13.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.944 "dma_device_type": 2 00:20:13.944 } 00:20:13.944 ], 00:20:13.944 "driver_specific": {} 00:20:13.944 } 00:20:13.944 ] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.944 BaseBdev3 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.944 19:40:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.944 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.944 [ 00:20:13.944 { 00:20:13.944 "name": "BaseBdev3", 00:20:13.944 "aliases": [ 00:20:13.944 "bc560bca-a934-4723-bccf-4b082664e7c8" 00:20:13.944 ], 00:20:13.944 "product_name": "Malloc disk", 00:20:13.944 "block_size": 512, 00:20:13.944 "num_blocks": 65536, 00:20:13.944 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:13.944 "assigned_rate_limits": { 00:20:13.944 "rw_ios_per_sec": 0, 00:20:13.944 "rw_mbytes_per_sec": 0, 00:20:13.944 "r_mbytes_per_sec": 0, 00:20:13.944 "w_mbytes_per_sec": 0 00:20:13.944 }, 00:20:13.944 "claimed": false, 00:20:13.944 "zoned": false, 00:20:13.944 "supported_io_types": { 00:20:13.945 "read": true, 00:20:13.945 "write": true, 00:20:13.945 "unmap": true, 00:20:13.945 "flush": true, 00:20:13.945 "reset": true, 00:20:13.945 "nvme_admin": false, 00:20:13.945 "nvme_io": false, 00:20:13.945 "nvme_io_md": false, 00:20:13.945 "write_zeroes": true, 00:20:13.945 "zcopy": true, 00:20:13.945 "get_zone_info": false, 00:20:13.945 "zone_management": false, 00:20:13.945 "zone_append": false, 00:20:13.945 "compare": false, 00:20:13.945 "compare_and_write": false, 00:20:13.945 "abort": true, 00:20:13.945 "seek_hole": false, 00:20:13.945 "seek_data": false, 00:20:13.945 "copy": true, 00:20:13.945 "nvme_iov_md": false 00:20:13.945 }, 00:20:13.945 "memory_domains": [ 00:20:13.945 { 00:20:13.945 
"dma_device_id": "system", 00:20:13.945 "dma_device_type": 1 00:20:13.945 }, 00:20:13.945 { 00:20:13.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.945 "dma_device_type": 2 00:20:13.945 } 00:20:13.945 ], 00:20:13.945 "driver_specific": {} 00:20:13.945 } 00:20:13.945 ] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.945 BaseBdev4 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.945 [ 00:20:13.945 { 00:20:13.945 "name": "BaseBdev4", 00:20:13.945 "aliases": [ 00:20:13.945 "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829" 00:20:13.945 ], 00:20:13.945 "product_name": "Malloc disk", 00:20:13.945 "block_size": 512, 00:20:13.945 "num_blocks": 65536, 00:20:13.945 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:13.945 "assigned_rate_limits": { 00:20:13.945 "rw_ios_per_sec": 0, 00:20:13.945 "rw_mbytes_per_sec": 0, 00:20:13.945 "r_mbytes_per_sec": 0, 00:20:13.945 "w_mbytes_per_sec": 0 00:20:13.945 }, 00:20:13.945 "claimed": false, 00:20:13.945 "zoned": false, 00:20:13.945 "supported_io_types": { 00:20:13.945 "read": true, 00:20:13.945 "write": true, 00:20:13.945 "unmap": true, 00:20:13.945 "flush": true, 00:20:13.945 "reset": true, 00:20:13.945 "nvme_admin": false, 00:20:13.945 "nvme_io": false, 00:20:13.945 "nvme_io_md": false, 00:20:13.945 "write_zeroes": true, 00:20:13.945 "zcopy": true, 00:20:13.945 "get_zone_info": false, 00:20:13.945 "zone_management": false, 00:20:13.945 "zone_append": false, 00:20:13.945 "compare": false, 00:20:13.945 "compare_and_write": false, 00:20:13.945 "abort": true, 00:20:13.945 "seek_hole": false, 00:20:13.945 "seek_data": false, 00:20:13.945 "copy": true, 00:20:13.945 "nvme_iov_md": false 00:20:13.945 }, 00:20:13.945 "memory_domains": [ 
00:20:13.945 { 00:20:13.945 "dma_device_id": "system", 00:20:13.945 "dma_device_type": 1 00:20:13.945 }, 00:20:13.945 { 00:20:13.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.945 "dma_device_type": 2 00:20:13.945 } 00:20:13.945 ], 00:20:13.945 "driver_specific": {} 00:20:13.945 } 00:20:13.945 ] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.945 [2024-12-05 19:40:07.371163] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.945 [2024-12-05 19:40:07.371352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.945 [2024-12-05 19:40:07.371495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.945 [2024-12-05 19:40:07.374310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:13.945 [2024-12-05 19:40:07.374514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.945 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.946 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.203 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.203 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.203 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.203 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.203 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.203 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.203 "name": "Existed_Raid", 00:20:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.203 "strip_size_kb": 64, 00:20:14.203 "state": "configuring", 00:20:14.203 "raid_level": "raid5f", 00:20:14.203 
"superblock": false, 00:20:14.203 "num_base_bdevs": 4, 00:20:14.203 "num_base_bdevs_discovered": 3, 00:20:14.203 "num_base_bdevs_operational": 4, 00:20:14.203 "base_bdevs_list": [ 00:20:14.203 { 00:20:14.204 "name": "BaseBdev1", 00:20:14.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.204 "is_configured": false, 00:20:14.204 "data_offset": 0, 00:20:14.204 "data_size": 0 00:20:14.204 }, 00:20:14.204 { 00:20:14.204 "name": "BaseBdev2", 00:20:14.204 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:14.204 "is_configured": true, 00:20:14.204 "data_offset": 0, 00:20:14.204 "data_size": 65536 00:20:14.204 }, 00:20:14.204 { 00:20:14.204 "name": "BaseBdev3", 00:20:14.204 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:14.204 "is_configured": true, 00:20:14.204 "data_offset": 0, 00:20:14.204 "data_size": 65536 00:20:14.204 }, 00:20:14.204 { 00:20:14.204 "name": "BaseBdev4", 00:20:14.204 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:14.204 "is_configured": true, 00:20:14.204 "data_offset": 0, 00:20:14.204 "data_size": 65536 00:20:14.204 } 00:20:14.204 ] 00:20:14.204 }' 00:20:14.204 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.204 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.462 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:14.462 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.462 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.462 [2024-12-05 19:40:07.899419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.722 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.722 "name": "Existed_Raid", 00:20:14.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.722 "strip_size_kb": 64, 00:20:14.722 "state": "configuring", 00:20:14.722 "raid_level": "raid5f", 00:20:14.722 "superblock": false, 
00:20:14.722 "num_base_bdevs": 4, 00:20:14.723 "num_base_bdevs_discovered": 2, 00:20:14.723 "num_base_bdevs_operational": 4, 00:20:14.723 "base_bdevs_list": [ 00:20:14.723 { 00:20:14.723 "name": "BaseBdev1", 00:20:14.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.723 "is_configured": false, 00:20:14.723 "data_offset": 0, 00:20:14.723 "data_size": 0 00:20:14.723 }, 00:20:14.723 { 00:20:14.723 "name": null, 00:20:14.723 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:14.723 "is_configured": false, 00:20:14.723 "data_offset": 0, 00:20:14.723 "data_size": 65536 00:20:14.723 }, 00:20:14.723 { 00:20:14.723 "name": "BaseBdev3", 00:20:14.723 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:14.723 "is_configured": true, 00:20:14.723 "data_offset": 0, 00:20:14.723 "data_size": 65536 00:20:14.723 }, 00:20:14.723 { 00:20:14.723 "name": "BaseBdev4", 00:20:14.723 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:14.723 "is_configured": true, 00:20:14.723 "data_offset": 0, 00:20:14.723 "data_size": 65536 00:20:14.723 } 00:20:14.723 ] 00:20:14.723 }' 00:20:14.723 19:40:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.723 19:40:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.289 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.289 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:15.290 
19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.290 [2024-12-05 19:40:08.541591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.290 BaseBdev1 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.290 
19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.290 [ 00:20:15.290 { 00:20:15.290 "name": "BaseBdev1", 00:20:15.290 "aliases": [ 00:20:15.290 "385ada16-7293-4e0c-b7a3-19ed43b3cd80" 00:20:15.290 ], 00:20:15.290 "product_name": "Malloc disk", 00:20:15.290 "block_size": 512, 00:20:15.290 "num_blocks": 65536, 00:20:15.290 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:15.290 "assigned_rate_limits": { 00:20:15.290 "rw_ios_per_sec": 0, 00:20:15.290 "rw_mbytes_per_sec": 0, 00:20:15.290 "r_mbytes_per_sec": 0, 00:20:15.290 "w_mbytes_per_sec": 0 00:20:15.290 }, 00:20:15.290 "claimed": true, 00:20:15.290 "claim_type": "exclusive_write", 00:20:15.290 "zoned": false, 00:20:15.290 "supported_io_types": { 00:20:15.290 "read": true, 00:20:15.290 "write": true, 00:20:15.290 "unmap": true, 00:20:15.290 "flush": true, 00:20:15.290 "reset": true, 00:20:15.290 "nvme_admin": false, 00:20:15.290 "nvme_io": false, 00:20:15.290 "nvme_io_md": false, 00:20:15.290 "write_zeroes": true, 00:20:15.290 "zcopy": true, 00:20:15.290 "get_zone_info": false, 00:20:15.290 "zone_management": false, 00:20:15.290 "zone_append": false, 00:20:15.290 "compare": false, 00:20:15.290 "compare_and_write": false, 00:20:15.290 "abort": true, 00:20:15.290 "seek_hole": false, 00:20:15.290 "seek_data": false, 00:20:15.290 "copy": true, 00:20:15.290 "nvme_iov_md": false 00:20:15.290 }, 00:20:15.290 "memory_domains": [ 00:20:15.290 { 00:20:15.290 "dma_device_id": "system", 00:20:15.290 "dma_device_type": 1 00:20:15.290 }, 00:20:15.290 { 00:20:15.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.290 "dma_device_type": 2 00:20:15.290 } 00:20:15.290 ], 00:20:15.290 "driver_specific": {} 00:20:15.290 } 00:20:15.290 ] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:15.290 19:40:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.290 "name": "Existed_Raid", 00:20:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.290 "strip_size_kb": 64, 00:20:15.290 "state": 
"configuring", 00:20:15.290 "raid_level": "raid5f", 00:20:15.290 "superblock": false, 00:20:15.290 "num_base_bdevs": 4, 00:20:15.290 "num_base_bdevs_discovered": 3, 00:20:15.290 "num_base_bdevs_operational": 4, 00:20:15.290 "base_bdevs_list": [ 00:20:15.290 { 00:20:15.290 "name": "BaseBdev1", 00:20:15.290 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:15.290 "is_configured": true, 00:20:15.290 "data_offset": 0, 00:20:15.290 "data_size": 65536 00:20:15.290 }, 00:20:15.290 { 00:20:15.290 "name": null, 00:20:15.290 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:15.290 "is_configured": false, 00:20:15.290 "data_offset": 0, 00:20:15.290 "data_size": 65536 00:20:15.290 }, 00:20:15.290 { 00:20:15.290 "name": "BaseBdev3", 00:20:15.290 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:15.290 "is_configured": true, 00:20:15.290 "data_offset": 0, 00:20:15.290 "data_size": 65536 00:20:15.290 }, 00:20:15.290 { 00:20:15.290 "name": "BaseBdev4", 00:20:15.290 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:15.290 "is_configured": true, 00:20:15.290 "data_offset": 0, 00:20:15.290 "data_size": 65536 00:20:15.290 } 00:20:15.290 ] 00:20:15.290 }' 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.290 19:40:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.906 19:40:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.906 [2024-12-05 19:40:09.153964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.906 19:40:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.906 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.906 "name": "Existed_Raid", 00:20:15.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.906 "strip_size_kb": 64, 00:20:15.906 "state": "configuring", 00:20:15.906 "raid_level": "raid5f", 00:20:15.906 "superblock": false, 00:20:15.906 "num_base_bdevs": 4, 00:20:15.906 "num_base_bdevs_discovered": 2, 00:20:15.906 "num_base_bdevs_operational": 4, 00:20:15.906 "base_bdevs_list": [ 00:20:15.906 { 00:20:15.906 "name": "BaseBdev1", 00:20:15.906 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:15.906 "is_configured": true, 00:20:15.906 "data_offset": 0, 00:20:15.906 "data_size": 65536 00:20:15.906 }, 00:20:15.906 { 00:20:15.906 "name": null, 00:20:15.906 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:15.906 "is_configured": false, 00:20:15.906 "data_offset": 0, 00:20:15.906 "data_size": 65536 00:20:15.906 }, 00:20:15.906 { 00:20:15.906 "name": null, 00:20:15.906 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:15.906 "is_configured": false, 00:20:15.906 "data_offset": 0, 00:20:15.906 "data_size": 65536 00:20:15.906 }, 00:20:15.906 { 00:20:15.906 "name": "BaseBdev4", 00:20:15.906 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:15.906 "is_configured": true, 00:20:15.906 "data_offset": 0, 00:20:15.906 "data_size": 65536 00:20:15.907 } 00:20:15.907 ] 00:20:15.907 }' 00:20:15.907 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.907 19:40:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.473 [2024-12-05 19:40:09.750212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.473 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.474 
19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.474 "name": "Existed_Raid", 00:20:16.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.474 "strip_size_kb": 64, 00:20:16.474 "state": "configuring", 00:20:16.474 "raid_level": "raid5f", 00:20:16.474 "superblock": false, 00:20:16.474 "num_base_bdevs": 4, 00:20:16.474 "num_base_bdevs_discovered": 3, 00:20:16.474 "num_base_bdevs_operational": 4, 00:20:16.474 "base_bdevs_list": [ 00:20:16.474 { 00:20:16.474 "name": "BaseBdev1", 00:20:16.474 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:16.474 "is_configured": true, 00:20:16.474 "data_offset": 0, 00:20:16.474 "data_size": 65536 00:20:16.474 }, 00:20:16.474 { 00:20:16.474 "name": null, 00:20:16.474 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:16.474 "is_configured": 
false, 00:20:16.474 "data_offset": 0, 00:20:16.474 "data_size": 65536 00:20:16.474 }, 00:20:16.474 { 00:20:16.474 "name": "BaseBdev3", 00:20:16.474 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:16.474 "is_configured": true, 00:20:16.474 "data_offset": 0, 00:20:16.474 "data_size": 65536 00:20:16.474 }, 00:20:16.474 { 00:20:16.474 "name": "BaseBdev4", 00:20:16.474 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:16.474 "is_configured": true, 00:20:16.474 "data_offset": 0, 00:20:16.474 "data_size": 65536 00:20:16.474 } 00:20:16.474 ] 00:20:16.474 }' 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.474 19:40:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.039 [2024-12-05 19:40:10.322431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.039 "name": "Existed_Raid", 00:20:17.039 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:17.039 "strip_size_kb": 64, 00:20:17.039 "state": "configuring", 00:20:17.039 "raid_level": "raid5f", 00:20:17.039 "superblock": false, 00:20:17.039 "num_base_bdevs": 4, 00:20:17.039 "num_base_bdevs_discovered": 2, 00:20:17.039 "num_base_bdevs_operational": 4, 00:20:17.039 "base_bdevs_list": [ 00:20:17.039 { 00:20:17.039 "name": null, 00:20:17.039 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:17.039 "is_configured": false, 00:20:17.039 "data_offset": 0, 00:20:17.039 "data_size": 65536 00:20:17.039 }, 00:20:17.039 { 00:20:17.039 "name": null, 00:20:17.039 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:17.039 "is_configured": false, 00:20:17.039 "data_offset": 0, 00:20:17.039 "data_size": 65536 00:20:17.039 }, 00:20:17.039 { 00:20:17.039 "name": "BaseBdev3", 00:20:17.039 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:17.039 "is_configured": true, 00:20:17.039 "data_offset": 0, 00:20:17.039 "data_size": 65536 00:20:17.039 }, 00:20:17.039 { 00:20:17.039 "name": "BaseBdev4", 00:20:17.039 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:17.039 "is_configured": true, 00:20:17.039 "data_offset": 0, 00:20:17.039 "data_size": 65536 00:20:17.039 } 00:20:17.039 ] 00:20:17.039 }' 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.039 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.603 19:40:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.603 [2024-12-05 19:40:10.999872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.603 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.603 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:17.603 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.604 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.861 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.861 "name": "Existed_Raid", 00:20:17.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.861 "strip_size_kb": 64, 00:20:17.861 "state": "configuring", 00:20:17.861 "raid_level": "raid5f", 00:20:17.861 "superblock": false, 00:20:17.861 "num_base_bdevs": 4, 00:20:17.861 "num_base_bdevs_discovered": 3, 00:20:17.861 "num_base_bdevs_operational": 4, 00:20:17.861 "base_bdevs_list": [ 00:20:17.861 { 00:20:17.861 "name": null, 00:20:17.861 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:17.861 "is_configured": false, 00:20:17.861 "data_offset": 0, 00:20:17.861 "data_size": 65536 00:20:17.861 }, 00:20:17.861 { 00:20:17.861 "name": "BaseBdev2", 00:20:17.861 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:17.861 "is_configured": true, 00:20:17.861 "data_offset": 0, 00:20:17.861 "data_size": 65536 00:20:17.861 }, 00:20:17.861 { 00:20:17.861 "name": "BaseBdev3", 00:20:17.861 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:17.861 "is_configured": true, 00:20:17.861 "data_offset": 0, 00:20:17.861 "data_size": 65536 00:20:17.861 }, 00:20:17.861 { 00:20:17.861 "name": "BaseBdev4", 00:20:17.861 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:17.861 "is_configured": true, 00:20:17.861 "data_offset": 0, 00:20:17.861 "data_size": 65536 00:20:17.861 } 00:20:17.861 ] 00:20:17.861 }' 00:20:17.861 19:40:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.861 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.119 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:18.119 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.119 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.120 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.120 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 385ada16-7293-4e0c-b7a3-19ed43b3cd80 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.378 [2024-12-05 19:40:11.664002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:18.378 [2024-12-05 
19:40:11.664349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:18.378 [2024-12-05 19:40:11.664374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:18.378 [2024-12-05 19:40:11.664784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:18.378 [2024-12-05 19:40:11.671825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:18.378 [2024-12-05 19:40:11.671997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:18.378 [2024-12-05 19:40:11.672484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.378 NewBaseBdev 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.378 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.378 [ 00:20:18.378 { 00:20:18.378 "name": "NewBaseBdev", 00:20:18.378 "aliases": [ 00:20:18.378 "385ada16-7293-4e0c-b7a3-19ed43b3cd80" 00:20:18.378 ], 00:20:18.378 "product_name": "Malloc disk", 00:20:18.378 "block_size": 512, 00:20:18.378 "num_blocks": 65536, 00:20:18.378 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:18.378 "assigned_rate_limits": { 00:20:18.378 "rw_ios_per_sec": 0, 00:20:18.378 "rw_mbytes_per_sec": 0, 00:20:18.378 "r_mbytes_per_sec": 0, 00:20:18.378 "w_mbytes_per_sec": 0 00:20:18.378 }, 00:20:18.378 "claimed": true, 00:20:18.378 "claim_type": "exclusive_write", 00:20:18.378 "zoned": false, 00:20:18.378 "supported_io_types": { 00:20:18.378 "read": true, 00:20:18.378 "write": true, 00:20:18.378 "unmap": true, 00:20:18.378 "flush": true, 00:20:18.378 "reset": true, 00:20:18.379 "nvme_admin": false, 00:20:18.379 "nvme_io": false, 00:20:18.379 "nvme_io_md": false, 00:20:18.379 "write_zeroes": true, 00:20:18.379 "zcopy": true, 00:20:18.379 "get_zone_info": false, 00:20:18.379 "zone_management": false, 00:20:18.379 "zone_append": false, 00:20:18.379 "compare": false, 00:20:18.379 "compare_and_write": false, 00:20:18.379 "abort": true, 00:20:18.379 "seek_hole": false, 00:20:18.379 "seek_data": false, 00:20:18.379 "copy": true, 00:20:18.379 "nvme_iov_md": false 00:20:18.379 }, 00:20:18.379 "memory_domains": [ 00:20:18.379 { 00:20:18.379 "dma_device_id": "system", 00:20:18.379 "dma_device_type": 1 00:20:18.379 }, 00:20:18.379 { 00:20:18.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.379 "dma_device_type": 2 00:20:18.379 } 
00:20:18.379 ], 00:20:18.379 "driver_specific": {} 00:20:18.379 } 00:20:18.379 ] 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.379 "name": "Existed_Raid", 00:20:18.379 "uuid": "d050f77e-0750-47b6-abb8-5fd9d481478a", 00:20:18.379 "strip_size_kb": 64, 00:20:18.379 "state": "online", 00:20:18.379 "raid_level": "raid5f", 00:20:18.379 "superblock": false, 00:20:18.379 "num_base_bdevs": 4, 00:20:18.379 "num_base_bdevs_discovered": 4, 00:20:18.379 "num_base_bdevs_operational": 4, 00:20:18.379 "base_bdevs_list": [ 00:20:18.379 { 00:20:18.379 "name": "NewBaseBdev", 00:20:18.379 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:18.379 "is_configured": true, 00:20:18.379 "data_offset": 0, 00:20:18.379 "data_size": 65536 00:20:18.379 }, 00:20:18.379 { 00:20:18.379 "name": "BaseBdev2", 00:20:18.379 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:18.379 "is_configured": true, 00:20:18.379 "data_offset": 0, 00:20:18.379 "data_size": 65536 00:20:18.379 }, 00:20:18.379 { 00:20:18.379 "name": "BaseBdev3", 00:20:18.379 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:18.379 "is_configured": true, 00:20:18.379 "data_offset": 0, 00:20:18.379 "data_size": 65536 00:20:18.379 }, 00:20:18.379 { 00:20:18.379 "name": "BaseBdev4", 00:20:18.379 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:18.379 "is_configured": true, 00:20:18.379 "data_offset": 0, 00:20:18.379 "data_size": 65536 00:20:18.379 } 00:20:18.379 ] 00:20:18.379 }' 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.379 19:40:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.946 [2024-12-05 19:40:12.216649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.946 "name": "Existed_Raid", 00:20:18.946 "aliases": [ 00:20:18.946 "d050f77e-0750-47b6-abb8-5fd9d481478a" 00:20:18.946 ], 00:20:18.946 "product_name": "Raid Volume", 00:20:18.946 "block_size": 512, 00:20:18.946 "num_blocks": 196608, 00:20:18.946 "uuid": "d050f77e-0750-47b6-abb8-5fd9d481478a", 00:20:18.946 "assigned_rate_limits": { 00:20:18.946 "rw_ios_per_sec": 0, 00:20:18.946 "rw_mbytes_per_sec": 0, 00:20:18.946 "r_mbytes_per_sec": 0, 00:20:18.946 "w_mbytes_per_sec": 0 00:20:18.946 }, 00:20:18.946 "claimed": false, 00:20:18.946 "zoned": false, 00:20:18.946 "supported_io_types": { 00:20:18.946 "read": true, 00:20:18.946 "write": true, 00:20:18.946 "unmap": false, 00:20:18.946 "flush": false, 00:20:18.946 "reset": true, 00:20:18.946 "nvme_admin": false, 00:20:18.946 "nvme_io": false, 00:20:18.946 "nvme_io_md": 
false, 00:20:18.946 "write_zeroes": true, 00:20:18.946 "zcopy": false, 00:20:18.946 "get_zone_info": false, 00:20:18.946 "zone_management": false, 00:20:18.946 "zone_append": false, 00:20:18.946 "compare": false, 00:20:18.946 "compare_and_write": false, 00:20:18.946 "abort": false, 00:20:18.946 "seek_hole": false, 00:20:18.946 "seek_data": false, 00:20:18.946 "copy": false, 00:20:18.946 "nvme_iov_md": false 00:20:18.946 }, 00:20:18.946 "driver_specific": { 00:20:18.946 "raid": { 00:20:18.946 "uuid": "d050f77e-0750-47b6-abb8-5fd9d481478a", 00:20:18.946 "strip_size_kb": 64, 00:20:18.946 "state": "online", 00:20:18.946 "raid_level": "raid5f", 00:20:18.946 "superblock": false, 00:20:18.946 "num_base_bdevs": 4, 00:20:18.946 "num_base_bdevs_discovered": 4, 00:20:18.946 "num_base_bdevs_operational": 4, 00:20:18.946 "base_bdevs_list": [ 00:20:18.946 { 00:20:18.946 "name": "NewBaseBdev", 00:20:18.946 "uuid": "385ada16-7293-4e0c-b7a3-19ed43b3cd80", 00:20:18.946 "is_configured": true, 00:20:18.946 "data_offset": 0, 00:20:18.946 "data_size": 65536 00:20:18.946 }, 00:20:18.946 { 00:20:18.946 "name": "BaseBdev2", 00:20:18.946 "uuid": "eb7273d3-1a62-42df-9bbb-5f03002ada35", 00:20:18.946 "is_configured": true, 00:20:18.946 "data_offset": 0, 00:20:18.946 "data_size": 65536 00:20:18.946 }, 00:20:18.946 { 00:20:18.946 "name": "BaseBdev3", 00:20:18.946 "uuid": "bc560bca-a934-4723-bccf-4b082664e7c8", 00:20:18.946 "is_configured": true, 00:20:18.946 "data_offset": 0, 00:20:18.946 "data_size": 65536 00:20:18.946 }, 00:20:18.946 { 00:20:18.946 "name": "BaseBdev4", 00:20:18.946 "uuid": "2dad866c-8ccb-4f9b-8b05-b44a2b0f6829", 00:20:18.946 "is_configured": true, 00:20:18.946 "data_offset": 0, 00:20:18.946 "data_size": 65536 00:20:18.946 } 00:20:18.946 ] 00:20:18.946 } 00:20:18.946 } 00:20:18.946 }' 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.946 19:40:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:18.946 BaseBdev2 00:20:18.946 BaseBdev3 00:20:18.946 BaseBdev4' 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:18.946 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.947 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.947 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.947 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.205 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.206 [2024-12-05 19:40:12.564468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:19.206 [2024-12-05 19:40:12.564664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.206 [2024-12-05 19:40:12.564911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.206 [2024-12-05 19:40:12.565540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.206 [2024-12-05 19:40:12.565757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83174 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83174 ']' 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83174 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.206 19:40:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83174 00:20:19.206 killing process with pid 83174 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83174' 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83174 00:20:19.206 [2024-12-05 19:40:12.603945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.206 19:40:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83174 00:20:19.815 [2024-12-05 19:40:12.959314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.752 ************************************ 00:20:20.752 END TEST raid5f_state_function_test 00:20:20.752 ************************************ 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:20.753 00:20:20.753 real 0m12.982s 00:20:20.753 user 0m21.472s 00:20:20.753 sys 0m1.824s 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.753 19:40:14 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:20:20.753 19:40:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:20.753 19:40:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.753 19:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.753 ************************************ 00:20:20.753 START TEST 
raid5f_state_function_test_sb 00:20:20.753 ************************************ 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:20.753 
19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:20.753 Process raid pid: 83857 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83857 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83857' 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83857 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # 
'[' -z 83857 ']' 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.753 19:40:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.012 [2024-12-05 19:40:14.202437] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:21.012 [2024-12-05 19:40:14.202638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.012 [2024-12-05 19:40:14.400889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.270 [2024-12-05 19:40:14.582376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.528 [2024-12-05 19:40:14.803730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.528 [2024-12-05 19:40:14.803784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.801 [2024-12-05 19:40:15.213822] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.801 [2024-12-05 19:40:15.213889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.801 [2024-12-05 19:40:15.213906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:21.801 [2024-12-05 19:40:15.213923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:21.801 [2024-12-05 19:40:15.213933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:20:21.801 [2024-12-05 19:40:15.213948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:21.801 [2024-12-05 19:40:15.213959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:21.801 [2024-12-05 19:40:15.213973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.801 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.061 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.061 "name": "Existed_Raid", 00:20:22.061 "uuid": "8d25c49d-b434-4906-be75-94d66ed70554", 00:20:22.061 "strip_size_kb": 64, 00:20:22.061 "state": "configuring", 00:20:22.061 "raid_level": "raid5f", 00:20:22.061 "superblock": true, 00:20:22.061 "num_base_bdevs": 4, 00:20:22.061 "num_base_bdevs_discovered": 0, 00:20:22.061 "num_base_bdevs_operational": 4, 00:20:22.061 "base_bdevs_list": [ 00:20:22.061 { 00:20:22.061 "name": "BaseBdev1", 00:20:22.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.061 "is_configured": false, 00:20:22.061 "data_offset": 0, 00:20:22.061 "data_size": 0 00:20:22.061 }, 00:20:22.061 { 00:20:22.061 "name": "BaseBdev2", 00:20:22.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.061 "is_configured": false, 00:20:22.061 "data_offset": 0, 00:20:22.061 "data_size": 0 00:20:22.062 }, 00:20:22.062 { 00:20:22.062 "name": "BaseBdev3", 00:20:22.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.062 "is_configured": false, 00:20:22.062 "data_offset": 0, 00:20:22.062 "data_size": 0 00:20:22.062 }, 00:20:22.062 { 00:20:22.062 "name": "BaseBdev4", 00:20:22.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.062 "is_configured": false, 00:20:22.062 "data_offset": 0, 00:20:22.062 "data_size": 0 00:20:22.062 } 00:20:22.062 ] 00:20:22.062 }' 00:20:22.062 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.062 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.319 [2024-12-05 19:40:15.717892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.319 [2024-12-05 19:40:15.718099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.319 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.319 [2024-12-05 19:40:15.729933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:22.319 [2024-12-05 19:40:15.730107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:22.320 [2024-12-05 19:40:15.730224] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.320 [2024-12-05 19:40:15.730284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.320 [2024-12-05 19:40:15.730491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:22.320 [2024-12-05 19:40:15.730562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:22.320 [2024-12-05 19:40:15.730681] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:22.320 [2024-12-05 19:40:15.730773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:22.320 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.320 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:22.320 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.320 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.577 [2024-12-05 19:40:15.776019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.577 BaseBdev1 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.577 [ 00:20:22.577 { 00:20:22.577 "name": "BaseBdev1", 00:20:22.577 "aliases": [ 00:20:22.577 "65ee5461-69a3-4555-ab76-b2f1a4653e05" 00:20:22.577 ], 00:20:22.577 "product_name": "Malloc disk", 00:20:22.577 "block_size": 512, 00:20:22.577 "num_blocks": 65536, 00:20:22.577 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:22.577 "assigned_rate_limits": { 00:20:22.577 "rw_ios_per_sec": 0, 00:20:22.577 "rw_mbytes_per_sec": 0, 00:20:22.577 "r_mbytes_per_sec": 0, 00:20:22.577 "w_mbytes_per_sec": 0 00:20:22.577 }, 00:20:22.577 "claimed": true, 00:20:22.577 "claim_type": "exclusive_write", 00:20:22.577 "zoned": false, 00:20:22.577 "supported_io_types": { 00:20:22.577 "read": true, 00:20:22.577 "write": true, 00:20:22.577 "unmap": true, 00:20:22.577 "flush": true, 00:20:22.577 "reset": true, 00:20:22.577 "nvme_admin": false, 00:20:22.577 "nvme_io": false, 00:20:22.577 "nvme_io_md": false, 00:20:22.577 "write_zeroes": true, 00:20:22.577 "zcopy": true, 00:20:22.577 "get_zone_info": false, 00:20:22.577 "zone_management": false, 00:20:22.577 "zone_append": false, 00:20:22.577 "compare": false, 00:20:22.577 "compare_and_write": false, 00:20:22.577 "abort": true, 00:20:22.577 "seek_hole": false, 00:20:22.577 "seek_data": false, 00:20:22.577 "copy": true, 00:20:22.577 "nvme_iov_md": false 00:20:22.577 }, 00:20:22.577 "memory_domains": [ 00:20:22.577 { 00:20:22.577 "dma_device_id": "system", 00:20:22.577 "dma_device_type": 1 00:20:22.577 }, 00:20:22.577 { 00:20:22.577 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:22.577 "dma_device_type": 2 00:20:22.577 } 00:20:22.577 ], 00:20:22.577 "driver_specific": {} 00:20:22.577 } 00:20:22.577 ] 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.577 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.578 19:40:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.578 "name": "Existed_Raid", 00:20:22.578 "uuid": "1d9881f1-5479-4d3e-8e8a-3b59bd952d15", 00:20:22.578 "strip_size_kb": 64, 00:20:22.578 "state": "configuring", 00:20:22.578 "raid_level": "raid5f", 00:20:22.578 "superblock": true, 00:20:22.578 "num_base_bdevs": 4, 00:20:22.578 "num_base_bdevs_discovered": 1, 00:20:22.578 "num_base_bdevs_operational": 4, 00:20:22.578 "base_bdevs_list": [ 00:20:22.578 { 00:20:22.578 "name": "BaseBdev1", 00:20:22.578 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:22.578 "is_configured": true, 00:20:22.578 "data_offset": 2048, 00:20:22.578 "data_size": 63488 00:20:22.578 }, 00:20:22.578 { 00:20:22.578 "name": "BaseBdev2", 00:20:22.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.578 "is_configured": false, 00:20:22.578 "data_offset": 0, 00:20:22.578 "data_size": 0 00:20:22.578 }, 00:20:22.578 { 00:20:22.578 "name": "BaseBdev3", 00:20:22.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.578 "is_configured": false, 00:20:22.578 "data_offset": 0, 00:20:22.578 "data_size": 0 00:20:22.578 }, 00:20:22.578 { 00:20:22.578 "name": "BaseBdev4", 00:20:22.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.578 "is_configured": false, 00:20:22.578 "data_offset": 0, 00:20:22.578 "data_size": 0 00:20:22.578 } 00:20:22.578 ] 00:20:22.578 }' 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.578 19:40:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.143 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:23.144 19:40:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.144 [2024-12-05 19:40:16.336224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.144 [2024-12-05 19:40:16.336455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.144 [2024-12-05 19:40:16.344276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.144 [2024-12-05 19:40:16.346778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.144 [2024-12-05 19:40:16.346959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.144 [2024-12-05 19:40:16.347077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:23.144 [2024-12-05 19:40:16.347233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:23.144 [2024-12-05 19:40:16.347375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:23.144 [2024-12-05 19:40:16.347439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.144 19:40:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.144 "name": "Existed_Raid", 00:20:23.144 "uuid": "f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:23.144 "strip_size_kb": 64, 00:20:23.144 "state": "configuring", 00:20:23.144 "raid_level": "raid5f", 00:20:23.144 "superblock": true, 00:20:23.144 "num_base_bdevs": 4, 00:20:23.144 "num_base_bdevs_discovered": 1, 00:20:23.144 "num_base_bdevs_operational": 4, 00:20:23.144 "base_bdevs_list": [ 00:20:23.144 { 00:20:23.144 "name": "BaseBdev1", 00:20:23.144 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:23.144 "is_configured": true, 00:20:23.144 "data_offset": 2048, 00:20:23.144 "data_size": 63488 00:20:23.144 }, 00:20:23.144 { 00:20:23.144 "name": "BaseBdev2", 00:20:23.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.144 "is_configured": false, 00:20:23.144 "data_offset": 0, 00:20:23.144 "data_size": 0 00:20:23.144 }, 00:20:23.144 { 00:20:23.144 "name": "BaseBdev3", 00:20:23.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.144 "is_configured": false, 00:20:23.144 "data_offset": 0, 00:20:23.144 "data_size": 0 00:20:23.144 }, 00:20:23.144 { 00:20:23.144 "name": "BaseBdev4", 00:20:23.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.144 "is_configured": false, 00:20:23.144 "data_offset": 0, 00:20:23.144 "data_size": 0 00:20:23.144 } 00:20:23.144 ] 00:20:23.144 }' 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.144 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.710 [2024-12-05 19:40:16.892528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.710 BaseBdev2 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.710 [ 00:20:23.710 { 00:20:23.710 "name": "BaseBdev2", 00:20:23.710 "aliases": [ 00:20:23.710 
"5d27ed95-080b-4c85-810f-8aca1f8db3cc" 00:20:23.710 ], 00:20:23.710 "product_name": "Malloc disk", 00:20:23.710 "block_size": 512, 00:20:23.710 "num_blocks": 65536, 00:20:23.710 "uuid": "5d27ed95-080b-4c85-810f-8aca1f8db3cc", 00:20:23.710 "assigned_rate_limits": { 00:20:23.710 "rw_ios_per_sec": 0, 00:20:23.710 "rw_mbytes_per_sec": 0, 00:20:23.710 "r_mbytes_per_sec": 0, 00:20:23.710 "w_mbytes_per_sec": 0 00:20:23.710 }, 00:20:23.710 "claimed": true, 00:20:23.710 "claim_type": "exclusive_write", 00:20:23.710 "zoned": false, 00:20:23.710 "supported_io_types": { 00:20:23.710 "read": true, 00:20:23.710 "write": true, 00:20:23.710 "unmap": true, 00:20:23.710 "flush": true, 00:20:23.710 "reset": true, 00:20:23.710 "nvme_admin": false, 00:20:23.710 "nvme_io": false, 00:20:23.710 "nvme_io_md": false, 00:20:23.710 "write_zeroes": true, 00:20:23.710 "zcopy": true, 00:20:23.710 "get_zone_info": false, 00:20:23.710 "zone_management": false, 00:20:23.710 "zone_append": false, 00:20:23.710 "compare": false, 00:20:23.710 "compare_and_write": false, 00:20:23.710 "abort": true, 00:20:23.710 "seek_hole": false, 00:20:23.710 "seek_data": false, 00:20:23.710 "copy": true, 00:20:23.710 "nvme_iov_md": false 00:20:23.710 }, 00:20:23.710 "memory_domains": [ 00:20:23.710 { 00:20:23.710 "dma_device_id": "system", 00:20:23.710 "dma_device_type": 1 00:20:23.710 }, 00:20:23.710 { 00:20:23.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.710 "dma_device_type": 2 00:20:23.710 } 00:20:23.710 ], 00:20:23.710 "driver_specific": {} 00:20:23.710 } 00:20:23.710 ] 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.710 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.710 "name": "Existed_Raid", 00:20:23.711 "uuid": 
"f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:23.711 "strip_size_kb": 64, 00:20:23.711 "state": "configuring", 00:20:23.711 "raid_level": "raid5f", 00:20:23.711 "superblock": true, 00:20:23.711 "num_base_bdevs": 4, 00:20:23.711 "num_base_bdevs_discovered": 2, 00:20:23.711 "num_base_bdevs_operational": 4, 00:20:23.711 "base_bdevs_list": [ 00:20:23.711 { 00:20:23.711 "name": "BaseBdev1", 00:20:23.711 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:23.711 "is_configured": true, 00:20:23.711 "data_offset": 2048, 00:20:23.711 "data_size": 63488 00:20:23.711 }, 00:20:23.711 { 00:20:23.711 "name": "BaseBdev2", 00:20:23.711 "uuid": "5d27ed95-080b-4c85-810f-8aca1f8db3cc", 00:20:23.711 "is_configured": true, 00:20:23.711 "data_offset": 2048, 00:20:23.711 "data_size": 63488 00:20:23.711 }, 00:20:23.711 { 00:20:23.711 "name": "BaseBdev3", 00:20:23.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.711 "is_configured": false, 00:20:23.711 "data_offset": 0, 00:20:23.711 "data_size": 0 00:20:23.711 }, 00:20:23.711 { 00:20:23.711 "name": "BaseBdev4", 00:20:23.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.711 "is_configured": false, 00:20:23.711 "data_offset": 0, 00:20:23.711 "data_size": 0 00:20:23.711 } 00:20:23.711 ] 00:20:23.711 }' 00:20:23.711 19:40:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.711 19:40:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.279 [2024-12-05 19:40:17.504005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.279 BaseBdev3 
00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.279 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.279 [ 00:20:24.279 { 00:20:24.279 "name": "BaseBdev3", 00:20:24.279 "aliases": [ 00:20:24.279 "30dfa8f3-8a4a-47a1-9279-5b7a73b5e9b8" 00:20:24.279 ], 00:20:24.279 "product_name": "Malloc disk", 00:20:24.279 "block_size": 512, 00:20:24.279 "num_blocks": 65536, 00:20:24.279 "uuid": "30dfa8f3-8a4a-47a1-9279-5b7a73b5e9b8", 00:20:24.279 
"assigned_rate_limits": { 00:20:24.279 "rw_ios_per_sec": 0, 00:20:24.279 "rw_mbytes_per_sec": 0, 00:20:24.279 "r_mbytes_per_sec": 0, 00:20:24.279 "w_mbytes_per_sec": 0 00:20:24.279 }, 00:20:24.279 "claimed": true, 00:20:24.279 "claim_type": "exclusive_write", 00:20:24.279 "zoned": false, 00:20:24.279 "supported_io_types": { 00:20:24.279 "read": true, 00:20:24.279 "write": true, 00:20:24.279 "unmap": true, 00:20:24.279 "flush": true, 00:20:24.279 "reset": true, 00:20:24.279 "nvme_admin": false, 00:20:24.279 "nvme_io": false, 00:20:24.279 "nvme_io_md": false, 00:20:24.279 "write_zeroes": true, 00:20:24.279 "zcopy": true, 00:20:24.279 "get_zone_info": false, 00:20:24.279 "zone_management": false, 00:20:24.279 "zone_append": false, 00:20:24.279 "compare": false, 00:20:24.279 "compare_and_write": false, 00:20:24.279 "abort": true, 00:20:24.279 "seek_hole": false, 00:20:24.279 "seek_data": false, 00:20:24.279 "copy": true, 00:20:24.279 "nvme_iov_md": false 00:20:24.279 }, 00:20:24.279 "memory_domains": [ 00:20:24.279 { 00:20:24.279 "dma_device_id": "system", 00:20:24.279 "dma_device_type": 1 00:20:24.279 }, 00:20:24.279 { 00:20:24.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.279 "dma_device_type": 2 00:20:24.279 } 00:20:24.279 ], 00:20:24.279 "driver_specific": {} 00:20:24.279 } 00:20:24.279 ] 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.280 "name": "Existed_Raid", 00:20:24.280 "uuid": "f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:24.280 "strip_size_kb": 64, 00:20:24.280 "state": "configuring", 00:20:24.280 "raid_level": "raid5f", 00:20:24.280 "superblock": true, 00:20:24.280 "num_base_bdevs": 4, 00:20:24.280 "num_base_bdevs_discovered": 3, 
00:20:24.280 "num_base_bdevs_operational": 4, 00:20:24.280 "base_bdevs_list": [ 00:20:24.280 { 00:20:24.280 "name": "BaseBdev1", 00:20:24.280 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:24.280 "is_configured": true, 00:20:24.280 "data_offset": 2048, 00:20:24.280 "data_size": 63488 00:20:24.280 }, 00:20:24.280 { 00:20:24.280 "name": "BaseBdev2", 00:20:24.280 "uuid": "5d27ed95-080b-4c85-810f-8aca1f8db3cc", 00:20:24.280 "is_configured": true, 00:20:24.280 "data_offset": 2048, 00:20:24.280 "data_size": 63488 00:20:24.280 }, 00:20:24.280 { 00:20:24.280 "name": "BaseBdev3", 00:20:24.280 "uuid": "30dfa8f3-8a4a-47a1-9279-5b7a73b5e9b8", 00:20:24.280 "is_configured": true, 00:20:24.280 "data_offset": 2048, 00:20:24.280 "data_size": 63488 00:20:24.280 }, 00:20:24.280 { 00:20:24.280 "name": "BaseBdev4", 00:20:24.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.280 "is_configured": false, 00:20:24.280 "data_offset": 0, 00:20:24.280 "data_size": 0 00:20:24.280 } 00:20:24.280 ] 00:20:24.280 }' 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.280 19:40:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.848 [2024-12-05 19:40:18.105435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:24.848 [2024-12-05 19:40:18.105851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:24.848 [2024-12-05 19:40:18.105871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:24.848 BaseBdev4 
00:20:24.848 [2024-12-05 19:40:18.106199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.848 [2024-12-05 19:40:18.113272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:24.848 [2024-12-05 19:40:18.113449] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:24.848 [2024-12-05 19:40:18.113916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:24.848 19:40:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.848 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.848 [ 00:20:24.848 { 00:20:24.848 "name": "BaseBdev4", 00:20:24.848 "aliases": [ 00:20:24.848 "5da90697-a2e5-4558-a0cb-6b10f8917e5d" 00:20:24.848 ], 00:20:24.848 "product_name": "Malloc disk", 00:20:24.848 "block_size": 512, 00:20:24.848 "num_blocks": 65536, 00:20:24.848 "uuid": "5da90697-a2e5-4558-a0cb-6b10f8917e5d", 00:20:24.848 "assigned_rate_limits": { 00:20:24.848 "rw_ios_per_sec": 0, 00:20:24.848 "rw_mbytes_per_sec": 0, 00:20:24.848 "r_mbytes_per_sec": 0, 00:20:24.848 "w_mbytes_per_sec": 0 00:20:24.848 }, 00:20:24.848 "claimed": true, 00:20:24.848 "claim_type": "exclusive_write", 00:20:24.848 "zoned": false, 00:20:24.848 "supported_io_types": { 00:20:24.848 "read": true, 00:20:24.848 "write": true, 00:20:24.848 "unmap": true, 00:20:24.848 "flush": true, 00:20:24.848 "reset": true, 00:20:24.848 "nvme_admin": false, 00:20:24.848 "nvme_io": false, 00:20:24.848 "nvme_io_md": false, 00:20:24.848 "write_zeroes": true, 00:20:24.848 "zcopy": true, 00:20:24.848 "get_zone_info": false, 00:20:24.848 "zone_management": false, 00:20:24.848 "zone_append": false, 00:20:24.848 "compare": false, 00:20:24.848 "compare_and_write": false, 00:20:24.848 "abort": true, 00:20:24.848 "seek_hole": false, 00:20:24.848 "seek_data": false, 00:20:24.848 "copy": true, 00:20:24.848 "nvme_iov_md": false 00:20:24.848 }, 00:20:24.848 "memory_domains": [ 00:20:24.848 { 00:20:24.848 "dma_device_id": "system", 00:20:24.848 "dma_device_type": 1 00:20:24.848 }, 00:20:24.848 { 00:20:24.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.848 "dma_device_type": 2 00:20:24.848 } 00:20:24.848 ], 00:20:24.849 "driver_specific": {} 00:20:24.849 } 00:20:24.849 ] 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.849 19:40:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.849 "name": "Existed_Raid", 00:20:24.849 "uuid": "f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:24.849 "strip_size_kb": 64, 00:20:24.849 "state": "online", 00:20:24.849 "raid_level": "raid5f", 00:20:24.849 "superblock": true, 00:20:24.849 "num_base_bdevs": 4, 00:20:24.849 "num_base_bdevs_discovered": 4, 00:20:24.849 "num_base_bdevs_operational": 4, 00:20:24.849 "base_bdevs_list": [ 00:20:24.849 { 00:20:24.849 "name": "BaseBdev1", 00:20:24.849 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:24.849 "is_configured": true, 00:20:24.849 "data_offset": 2048, 00:20:24.849 "data_size": 63488 00:20:24.849 }, 00:20:24.849 { 00:20:24.849 "name": "BaseBdev2", 00:20:24.849 "uuid": "5d27ed95-080b-4c85-810f-8aca1f8db3cc", 00:20:24.849 "is_configured": true, 00:20:24.849 "data_offset": 2048, 00:20:24.849 "data_size": 63488 00:20:24.849 }, 00:20:24.849 { 00:20:24.849 "name": "BaseBdev3", 00:20:24.849 "uuid": "30dfa8f3-8a4a-47a1-9279-5b7a73b5e9b8", 00:20:24.849 "is_configured": true, 00:20:24.849 "data_offset": 2048, 00:20:24.849 "data_size": 63488 00:20:24.849 }, 00:20:24.849 { 00:20:24.849 "name": "BaseBdev4", 00:20:24.849 "uuid": "5da90697-a2e5-4558-a0cb-6b10f8917e5d", 00:20:24.849 "is_configured": true, 00:20:24.849 "data_offset": 2048, 00:20:24.849 "data_size": 63488 00:20:24.849 } 00:20:24.849 ] 00:20:24.849 }' 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.849 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.436 [2024-12-05 19:40:18.654121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.436 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:25.436 "name": "Existed_Raid", 00:20:25.436 "aliases": [ 00:20:25.436 "f74907aa-8de9-448f-91ee-6d1b6a29a7b9" 00:20:25.436 ], 00:20:25.436 "product_name": "Raid Volume", 00:20:25.436 "block_size": 512, 00:20:25.436 "num_blocks": 190464, 00:20:25.436 "uuid": "f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:25.436 "assigned_rate_limits": { 00:20:25.436 "rw_ios_per_sec": 0, 00:20:25.436 "rw_mbytes_per_sec": 0, 00:20:25.436 "r_mbytes_per_sec": 0, 00:20:25.436 "w_mbytes_per_sec": 0 00:20:25.436 }, 00:20:25.436 "claimed": false, 00:20:25.436 "zoned": false, 00:20:25.436 "supported_io_types": { 00:20:25.436 "read": true, 00:20:25.436 "write": true, 00:20:25.436 "unmap": false, 00:20:25.436 "flush": false, 
00:20:25.436 "reset": true, 00:20:25.436 "nvme_admin": false, 00:20:25.436 "nvme_io": false, 00:20:25.436 "nvme_io_md": false, 00:20:25.436 "write_zeroes": true, 00:20:25.436 "zcopy": false, 00:20:25.436 "get_zone_info": false, 00:20:25.436 "zone_management": false, 00:20:25.436 "zone_append": false, 00:20:25.436 "compare": false, 00:20:25.436 "compare_and_write": false, 00:20:25.436 "abort": false, 00:20:25.436 "seek_hole": false, 00:20:25.436 "seek_data": false, 00:20:25.436 "copy": false, 00:20:25.436 "nvme_iov_md": false 00:20:25.436 }, 00:20:25.436 "driver_specific": { 00:20:25.436 "raid": { 00:20:25.436 "uuid": "f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:25.436 "strip_size_kb": 64, 00:20:25.436 "state": "online", 00:20:25.436 "raid_level": "raid5f", 00:20:25.436 "superblock": true, 00:20:25.436 "num_base_bdevs": 4, 00:20:25.436 "num_base_bdevs_discovered": 4, 00:20:25.436 "num_base_bdevs_operational": 4, 00:20:25.436 "base_bdevs_list": [ 00:20:25.436 { 00:20:25.436 "name": "BaseBdev1", 00:20:25.436 "uuid": "65ee5461-69a3-4555-ab76-b2f1a4653e05", 00:20:25.436 "is_configured": true, 00:20:25.436 "data_offset": 2048, 00:20:25.436 "data_size": 63488 00:20:25.436 }, 00:20:25.436 { 00:20:25.436 "name": "BaseBdev2", 00:20:25.436 "uuid": "5d27ed95-080b-4c85-810f-8aca1f8db3cc", 00:20:25.436 "is_configured": true, 00:20:25.436 "data_offset": 2048, 00:20:25.436 "data_size": 63488 00:20:25.436 }, 00:20:25.436 { 00:20:25.436 "name": "BaseBdev3", 00:20:25.436 "uuid": "30dfa8f3-8a4a-47a1-9279-5b7a73b5e9b8", 00:20:25.436 "is_configured": true, 00:20:25.437 "data_offset": 2048, 00:20:25.437 "data_size": 63488 00:20:25.437 }, 00:20:25.437 { 00:20:25.437 "name": "BaseBdev4", 00:20:25.437 "uuid": "5da90697-a2e5-4558-a0cb-6b10f8917e5d", 00:20:25.437 "is_configured": true, 00:20:25.437 "data_offset": 2048, 00:20:25.437 "data_size": 63488 00:20:25.437 } 00:20:25.437 ] 00:20:25.437 } 00:20:25.437 } 00:20:25.437 }' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:25.437 BaseBdev2 00:20:25.437 BaseBdev3 00:20:25.437 BaseBdev4' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:25.437 19:40:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.437 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.695 19:40:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.695 [2024-12-05 19:40:19.034018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.695 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.696 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.955 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.955 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.955 "name": "Existed_Raid", 00:20:25.955 "uuid": "f74907aa-8de9-448f-91ee-6d1b6a29a7b9", 00:20:25.955 "strip_size_kb": 64, 00:20:25.955 "state": "online", 00:20:25.955 "raid_level": "raid5f", 00:20:25.955 "superblock": true, 00:20:25.955 "num_base_bdevs": 4, 00:20:25.955 "num_base_bdevs_discovered": 3, 00:20:25.955 "num_base_bdevs_operational": 3, 00:20:25.955 "base_bdevs_list": [ 00:20:25.955 { 00:20:25.955 "name": null, 00:20:25.955 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:25.955 "is_configured": false, 00:20:25.955 "data_offset": 0, 00:20:25.955 "data_size": 63488 00:20:25.955 }, 00:20:25.955 { 00:20:25.955 "name": "BaseBdev2", 00:20:25.955 "uuid": "5d27ed95-080b-4c85-810f-8aca1f8db3cc", 00:20:25.955 "is_configured": true, 00:20:25.955 "data_offset": 2048, 00:20:25.955 "data_size": 63488 00:20:25.955 }, 00:20:25.955 { 00:20:25.955 "name": "BaseBdev3", 00:20:25.955 "uuid": "30dfa8f3-8a4a-47a1-9279-5b7a73b5e9b8", 00:20:25.955 "is_configured": true, 00:20:25.955 "data_offset": 2048, 00:20:25.955 "data_size": 63488 00:20:25.955 }, 00:20:25.955 { 00:20:25.955 "name": "BaseBdev4", 00:20:25.955 "uuid": "5da90697-a2e5-4558-a0cb-6b10f8917e5d", 00:20:25.955 "is_configured": true, 00:20:25.955 "data_offset": 2048, 00:20:25.955 "data_size": 63488 00:20:25.955 } 00:20:25.955 ] 00:20:25.955 }' 00:20:25.955 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.955 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.523 [2024-12-05 19:40:19.728983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:26.523 [2024-12-05 19:40:19.729356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.523 [2024-12-05 19:40:19.815470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.523 
19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.523 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.523 [2024-12-05 19:40:19.875539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.781 19:40:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.781 [2024-12-05 19:40:20.025514] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:26.781 [2024-12-05 19:40:20.025764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:26.781 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.782 BaseBdev2 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.782 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.040 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.040 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.040 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.040 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.040 [ 00:20:27.040 { 00:20:27.040 "name": "BaseBdev2", 00:20:27.040 "aliases": [ 00:20:27.040 "5038360e-a65c-4953-ab5d-bffe98bd145c" 00:20:27.040 ], 00:20:27.040 "product_name": "Malloc disk", 00:20:27.040 "block_size": 512, 00:20:27.040 "num_blocks": 65536, 00:20:27.040 "uuid": 
"5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:27.040 "assigned_rate_limits": { 00:20:27.040 "rw_ios_per_sec": 0, 00:20:27.040 "rw_mbytes_per_sec": 0, 00:20:27.040 "r_mbytes_per_sec": 0, 00:20:27.040 "w_mbytes_per_sec": 0 00:20:27.040 }, 00:20:27.040 "claimed": false, 00:20:27.040 "zoned": false, 00:20:27.040 "supported_io_types": { 00:20:27.040 "read": true, 00:20:27.040 "write": true, 00:20:27.040 "unmap": true, 00:20:27.040 "flush": true, 00:20:27.040 "reset": true, 00:20:27.040 "nvme_admin": false, 00:20:27.040 "nvme_io": false, 00:20:27.040 "nvme_io_md": false, 00:20:27.040 "write_zeroes": true, 00:20:27.040 "zcopy": true, 00:20:27.040 "get_zone_info": false, 00:20:27.040 "zone_management": false, 00:20:27.040 "zone_append": false, 00:20:27.040 "compare": false, 00:20:27.040 "compare_and_write": false, 00:20:27.040 "abort": true, 00:20:27.041 "seek_hole": false, 00:20:27.041 "seek_data": false, 00:20:27.041 "copy": true, 00:20:27.041 "nvme_iov_md": false 00:20:27.041 }, 00:20:27.041 "memory_domains": [ 00:20:27.041 { 00:20:27.041 "dma_device_id": "system", 00:20:27.041 "dma_device_type": 1 00:20:27.041 }, 00:20:27.041 { 00:20:27.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.041 "dma_device_type": 2 00:20:27.041 } 00:20:27.041 ], 00:20:27.041 "driver_specific": {} 00:20:27.041 } 00:20:27.041 ] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 BaseBdev3 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 [ 00:20:27.041 { 00:20:27.041 "name": "BaseBdev3", 00:20:27.041 "aliases": [ 00:20:27.041 "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef" 00:20:27.041 ], 00:20:27.041 
"product_name": "Malloc disk", 00:20:27.041 "block_size": 512, 00:20:27.041 "num_blocks": 65536, 00:20:27.041 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:27.041 "assigned_rate_limits": { 00:20:27.041 "rw_ios_per_sec": 0, 00:20:27.041 "rw_mbytes_per_sec": 0, 00:20:27.041 "r_mbytes_per_sec": 0, 00:20:27.041 "w_mbytes_per_sec": 0 00:20:27.041 }, 00:20:27.041 "claimed": false, 00:20:27.041 "zoned": false, 00:20:27.041 "supported_io_types": { 00:20:27.041 "read": true, 00:20:27.041 "write": true, 00:20:27.041 "unmap": true, 00:20:27.041 "flush": true, 00:20:27.041 "reset": true, 00:20:27.041 "nvme_admin": false, 00:20:27.041 "nvme_io": false, 00:20:27.041 "nvme_io_md": false, 00:20:27.041 "write_zeroes": true, 00:20:27.041 "zcopy": true, 00:20:27.041 "get_zone_info": false, 00:20:27.041 "zone_management": false, 00:20:27.041 "zone_append": false, 00:20:27.041 "compare": false, 00:20:27.041 "compare_and_write": false, 00:20:27.041 "abort": true, 00:20:27.041 "seek_hole": false, 00:20:27.041 "seek_data": false, 00:20:27.041 "copy": true, 00:20:27.041 "nvme_iov_md": false 00:20:27.041 }, 00:20:27.041 "memory_domains": [ 00:20:27.041 { 00:20:27.041 "dma_device_id": "system", 00:20:27.041 "dma_device_type": 1 00:20:27.041 }, 00:20:27.041 { 00:20:27.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.041 "dma_device_type": 2 00:20:27.041 } 00:20:27.041 ], 00:20:27.041 "driver_specific": {} 00:20:27.041 } 00:20:27.041 ] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 BaseBdev4 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 [ 00:20:27.041 { 00:20:27.041 "name": "BaseBdev4", 00:20:27.041 
"aliases": [ 00:20:27.041 "44d3effc-472e-477c-b74a-1791cb84a4ec" 00:20:27.041 ], 00:20:27.041 "product_name": "Malloc disk", 00:20:27.041 "block_size": 512, 00:20:27.041 "num_blocks": 65536, 00:20:27.041 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:27.041 "assigned_rate_limits": { 00:20:27.041 "rw_ios_per_sec": 0, 00:20:27.041 "rw_mbytes_per_sec": 0, 00:20:27.041 "r_mbytes_per_sec": 0, 00:20:27.041 "w_mbytes_per_sec": 0 00:20:27.041 }, 00:20:27.041 "claimed": false, 00:20:27.041 "zoned": false, 00:20:27.041 "supported_io_types": { 00:20:27.041 "read": true, 00:20:27.041 "write": true, 00:20:27.041 "unmap": true, 00:20:27.041 "flush": true, 00:20:27.041 "reset": true, 00:20:27.041 "nvme_admin": false, 00:20:27.041 "nvme_io": false, 00:20:27.041 "nvme_io_md": false, 00:20:27.041 "write_zeroes": true, 00:20:27.041 "zcopy": true, 00:20:27.041 "get_zone_info": false, 00:20:27.041 "zone_management": false, 00:20:27.041 "zone_append": false, 00:20:27.041 "compare": false, 00:20:27.041 "compare_and_write": false, 00:20:27.041 "abort": true, 00:20:27.041 "seek_hole": false, 00:20:27.041 "seek_data": false, 00:20:27.041 "copy": true, 00:20:27.041 "nvme_iov_md": false 00:20:27.041 }, 00:20:27.041 "memory_domains": [ 00:20:27.041 { 00:20:27.041 "dma_device_id": "system", 00:20:27.041 "dma_device_type": 1 00:20:27.041 }, 00:20:27.041 { 00:20:27.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.041 "dma_device_type": 2 00:20:27.041 } 00:20:27.041 ], 00:20:27.041 "driver_specific": {} 00:20:27.041 } 00:20:27.041 ] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:27.041 
19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.041 [2024-12-05 19:40:20.400583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.041 [2024-12-05 19:40:20.400768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.041 [2024-12-05 19:40:20.400919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.041 [2024-12-05 19:40:20.403302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.041 [2024-12-05 19:40:20.403491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.041 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.042 "name": "Existed_Raid", 00:20:27.042 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:27.042 "strip_size_kb": 64, 00:20:27.042 "state": "configuring", 00:20:27.042 "raid_level": "raid5f", 00:20:27.042 "superblock": true, 00:20:27.042 "num_base_bdevs": 4, 00:20:27.042 "num_base_bdevs_discovered": 3, 00:20:27.042 "num_base_bdevs_operational": 4, 00:20:27.042 "base_bdevs_list": [ 00:20:27.042 { 00:20:27.042 "name": "BaseBdev1", 00:20:27.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.042 "is_configured": false, 00:20:27.042 "data_offset": 0, 00:20:27.042 "data_size": 0 00:20:27.042 }, 00:20:27.042 { 00:20:27.042 "name": "BaseBdev2", 00:20:27.042 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:27.042 "is_configured": true, 00:20:27.042 "data_offset": 2048, 00:20:27.042 "data_size": 63488 00:20:27.042 }, 00:20:27.042 { 00:20:27.042 "name": "BaseBdev3", 
00:20:27.042 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:27.042 "is_configured": true, 00:20:27.042 "data_offset": 2048, 00:20:27.042 "data_size": 63488 00:20:27.042 }, 00:20:27.042 { 00:20:27.042 "name": "BaseBdev4", 00:20:27.042 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:27.042 "is_configured": true, 00:20:27.042 "data_offset": 2048, 00:20:27.042 "data_size": 63488 00:20:27.042 } 00:20:27.042 ] 00:20:27.042 }' 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.042 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 [2024-12-05 19:40:20.944781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.608 
19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.608 19:40:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.608 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.608 "name": "Existed_Raid", 00:20:27.608 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:27.608 "strip_size_kb": 64, 00:20:27.608 "state": "configuring", 00:20:27.608 "raid_level": "raid5f", 00:20:27.608 "superblock": true, 00:20:27.608 "num_base_bdevs": 4, 00:20:27.608 "num_base_bdevs_discovered": 2, 00:20:27.608 "num_base_bdevs_operational": 4, 00:20:27.608 "base_bdevs_list": [ 00:20:27.608 { 00:20:27.608 "name": "BaseBdev1", 00:20:27.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.608 "is_configured": false, 00:20:27.608 "data_offset": 0, 00:20:27.608 "data_size": 0 00:20:27.608 }, 00:20:27.608 { 00:20:27.608 "name": null, 00:20:27.608 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:27.608 "is_configured": false, 00:20:27.608 "data_offset": 0, 00:20:27.608 "data_size": 63488 00:20:27.608 }, 00:20:27.608 { 
00:20:27.608 "name": "BaseBdev3", 00:20:27.608 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:27.608 "is_configured": true, 00:20:27.608 "data_offset": 2048, 00:20:27.608 "data_size": 63488 00:20:27.608 }, 00:20:27.608 { 00:20:27.608 "name": "BaseBdev4", 00:20:27.608 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:27.608 "is_configured": true, 00:20:27.608 "data_offset": 2048, 00:20:27.608 "data_size": 63488 00:20:27.608 } 00:20:27.608 ] 00:20:27.608 }' 00:20:27.608 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.608 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.175 BaseBdev1 00:20:28.175 [2024-12-05 19:40:21.576119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.175 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.175 [ 00:20:28.175 { 00:20:28.175 "name": "BaseBdev1", 00:20:28.175 "aliases": [ 00:20:28.175 "748f296d-58bc-4273-8caf-dd9f07131dbc" 00:20:28.175 ], 00:20:28.175 "product_name": "Malloc disk", 00:20:28.175 "block_size": 512, 00:20:28.175 "num_blocks": 65536, 00:20:28.175 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:28.175 "assigned_rate_limits": { 00:20:28.175 "rw_ios_per_sec": 0, 00:20:28.175 "rw_mbytes_per_sec": 0, 00:20:28.175 
"r_mbytes_per_sec": 0, 00:20:28.175 "w_mbytes_per_sec": 0 00:20:28.175 }, 00:20:28.176 "claimed": true, 00:20:28.176 "claim_type": "exclusive_write", 00:20:28.176 "zoned": false, 00:20:28.176 "supported_io_types": { 00:20:28.176 "read": true, 00:20:28.176 "write": true, 00:20:28.176 "unmap": true, 00:20:28.176 "flush": true, 00:20:28.176 "reset": true, 00:20:28.176 "nvme_admin": false, 00:20:28.176 "nvme_io": false, 00:20:28.176 "nvme_io_md": false, 00:20:28.176 "write_zeroes": true, 00:20:28.176 "zcopy": true, 00:20:28.176 "get_zone_info": false, 00:20:28.176 "zone_management": false, 00:20:28.176 "zone_append": false, 00:20:28.176 "compare": false, 00:20:28.176 "compare_and_write": false, 00:20:28.176 "abort": true, 00:20:28.176 "seek_hole": false, 00:20:28.176 "seek_data": false, 00:20:28.176 "copy": true, 00:20:28.176 "nvme_iov_md": false 00:20:28.176 }, 00:20:28.176 "memory_domains": [ 00:20:28.176 { 00:20:28.176 "dma_device_id": "system", 00:20:28.176 "dma_device_type": 1 00:20:28.176 }, 00:20:28.176 { 00:20:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.176 "dma_device_type": 2 00:20:28.176 } 00:20:28.176 ], 00:20:28.176 "driver_specific": {} 00:20:28.176 } 00:20:28.176 ] 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.176 19:40:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.176 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.433 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.433 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.433 "name": "Existed_Raid", 00:20:28.433 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:28.433 "strip_size_kb": 64, 00:20:28.433 "state": "configuring", 00:20:28.433 "raid_level": "raid5f", 00:20:28.433 "superblock": true, 00:20:28.433 "num_base_bdevs": 4, 00:20:28.433 "num_base_bdevs_discovered": 3, 00:20:28.433 "num_base_bdevs_operational": 4, 00:20:28.433 "base_bdevs_list": [ 00:20:28.433 { 00:20:28.433 "name": "BaseBdev1", 00:20:28.433 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:28.433 "is_configured": true, 00:20:28.433 "data_offset": 2048, 00:20:28.433 "data_size": 63488 00:20:28.433 
}, 00:20:28.433 { 00:20:28.433 "name": null, 00:20:28.433 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:28.433 "is_configured": false, 00:20:28.433 "data_offset": 0, 00:20:28.433 "data_size": 63488 00:20:28.433 }, 00:20:28.433 { 00:20:28.433 "name": "BaseBdev3", 00:20:28.433 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:28.433 "is_configured": true, 00:20:28.433 "data_offset": 2048, 00:20:28.433 "data_size": 63488 00:20:28.433 }, 00:20:28.433 { 00:20:28.433 "name": "BaseBdev4", 00:20:28.433 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:28.433 "is_configured": true, 00:20:28.433 "data_offset": 2048, 00:20:28.433 "data_size": 63488 00:20:28.433 } 00:20:28.433 ] 00:20:28.433 }' 00:20:28.433 19:40:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.434 19:40:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.691 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:28.691 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.691 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.691 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.949 
[2024-12-05 19:40:22.184362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:28.949 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.949 "name": "Existed_Raid", 00:20:28.949 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:28.950 "strip_size_kb": 64, 00:20:28.950 "state": "configuring", 00:20:28.950 "raid_level": "raid5f", 00:20:28.950 "superblock": true, 00:20:28.950 "num_base_bdevs": 4, 00:20:28.950 "num_base_bdevs_discovered": 2, 00:20:28.950 "num_base_bdevs_operational": 4, 00:20:28.950 "base_bdevs_list": [ 00:20:28.950 { 00:20:28.950 "name": "BaseBdev1", 00:20:28.950 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:28.950 "is_configured": true, 00:20:28.950 "data_offset": 2048, 00:20:28.950 "data_size": 63488 00:20:28.950 }, 00:20:28.950 { 00:20:28.950 "name": null, 00:20:28.950 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:28.950 "is_configured": false, 00:20:28.950 "data_offset": 0, 00:20:28.950 "data_size": 63488 00:20:28.950 }, 00:20:28.950 { 00:20:28.950 "name": null, 00:20:28.950 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:28.950 "is_configured": false, 00:20:28.950 "data_offset": 0, 00:20:28.950 "data_size": 63488 00:20:28.950 }, 00:20:28.950 { 00:20:28.950 "name": "BaseBdev4", 00:20:28.950 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:28.950 "is_configured": true, 00:20:28.950 "data_offset": 2048, 00:20:28.950 "data_size": 63488 00:20:28.950 } 00:20:28.950 ] 00:20:28.950 }' 00:20:28.950 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.950 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.515 [2024-12-05 19:40:22.784573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.515 19:40:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.515 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.515 "name": "Existed_Raid", 00:20:29.515 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:29.515 "strip_size_kb": 64, 00:20:29.515 "state": "configuring", 00:20:29.515 "raid_level": "raid5f", 00:20:29.515 "superblock": true, 00:20:29.515 "num_base_bdevs": 4, 00:20:29.515 "num_base_bdevs_discovered": 3, 00:20:29.515 "num_base_bdevs_operational": 4, 00:20:29.515 "base_bdevs_list": [ 00:20:29.515 { 00:20:29.515 "name": "BaseBdev1", 00:20:29.515 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:29.515 "is_configured": true, 00:20:29.515 "data_offset": 2048, 00:20:29.515 "data_size": 63488 00:20:29.515 }, 00:20:29.515 { 00:20:29.515 "name": null, 00:20:29.515 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:29.515 "is_configured": false, 00:20:29.515 "data_offset": 0, 00:20:29.515 "data_size": 63488 00:20:29.515 }, 00:20:29.515 { 00:20:29.516 "name": "BaseBdev3", 00:20:29.516 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:29.516 "is_configured": true, 00:20:29.516 "data_offset": 2048, 00:20:29.516 "data_size": 63488 00:20:29.516 }, 00:20:29.516 { 
00:20:29.516 "name": "BaseBdev4", 00:20:29.516 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:29.516 "is_configured": true, 00:20:29.516 "data_offset": 2048, 00:20:29.516 "data_size": 63488 00:20:29.516 } 00:20:29.516 ] 00:20:29.516 }' 00:20:29.516 19:40:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.516 19:40:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.081 [2024-12-05 19:40:23.364822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.081 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.081 "name": "Existed_Raid", 00:20:30.081 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:30.081 "strip_size_kb": 64, 00:20:30.081 "state": "configuring", 00:20:30.081 "raid_level": "raid5f", 00:20:30.082 "superblock": true, 00:20:30.082 "num_base_bdevs": 4, 00:20:30.082 "num_base_bdevs_discovered": 2, 00:20:30.082 
"num_base_bdevs_operational": 4, 00:20:30.082 "base_bdevs_list": [ 00:20:30.082 { 00:20:30.082 "name": null, 00:20:30.082 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:30.082 "is_configured": false, 00:20:30.082 "data_offset": 0, 00:20:30.082 "data_size": 63488 00:20:30.082 }, 00:20:30.082 { 00:20:30.082 "name": null, 00:20:30.082 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:30.082 "is_configured": false, 00:20:30.082 "data_offset": 0, 00:20:30.082 "data_size": 63488 00:20:30.082 }, 00:20:30.082 { 00:20:30.082 "name": "BaseBdev3", 00:20:30.082 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:30.082 "is_configured": true, 00:20:30.082 "data_offset": 2048, 00:20:30.082 "data_size": 63488 00:20:30.082 }, 00:20:30.082 { 00:20:30.082 "name": "BaseBdev4", 00:20:30.082 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:30.082 "is_configured": true, 00:20:30.082 "data_offset": 2048, 00:20:30.082 "data_size": 63488 00:20:30.082 } 00:20:30.082 ] 00:20:30.082 }' 00:20:30.082 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.082 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.647 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.647 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.647 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.647 19:40:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:30.647 19:40:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.647 [2024-12-05 19:40:24.039377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.647 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.648 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.648 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.648 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:30.648 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.648 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.648 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.906 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.906 "name": "Existed_Raid", 00:20:30.906 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:30.906 "strip_size_kb": 64, 00:20:30.906 "state": "configuring", 00:20:30.906 "raid_level": "raid5f", 00:20:30.906 "superblock": true, 00:20:30.906 "num_base_bdevs": 4, 00:20:30.906 "num_base_bdevs_discovered": 3, 00:20:30.906 "num_base_bdevs_operational": 4, 00:20:30.906 "base_bdevs_list": [ 00:20:30.906 { 00:20:30.906 "name": null, 00:20:30.906 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:30.906 "is_configured": false, 00:20:30.906 "data_offset": 0, 00:20:30.906 "data_size": 63488 00:20:30.906 }, 00:20:30.906 { 00:20:30.906 "name": "BaseBdev2", 00:20:30.906 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:30.906 "is_configured": true, 00:20:30.906 "data_offset": 2048, 00:20:30.906 "data_size": 63488 00:20:30.906 }, 00:20:30.906 { 00:20:30.906 "name": "BaseBdev3", 00:20:30.906 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:30.906 "is_configured": true, 00:20:30.906 "data_offset": 2048, 00:20:30.906 "data_size": 63488 00:20:30.906 }, 00:20:30.906 { 00:20:30.906 "name": "BaseBdev4", 00:20:30.906 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:30.906 "is_configured": true, 00:20:30.906 "data_offset": 2048, 00:20:30.906 "data_size": 63488 00:20:30.906 } 00:20:30.906 ] 00:20:30.906 }' 00:20:30.906 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.906 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:31.165 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.165 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.165 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:31.165 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.165 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 748f296d-58bc-4273-8caf-dd9f07131dbc 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.424 [2024-12-05 19:40:24.718614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:31.424 NewBaseBdev 00:20:31.424 [2024-12-05 19:40:24.719171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:31.424 
[2024-12-05 19:40:24.719197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:31.424 [2024-12-05 19:40:24.719591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.424 [2024-12-05 19:40:24.726339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:31.424 [2024-12-05 19:40:24.726522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:31.424 [2024-12-05 19:40:24.726878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.424 [ 00:20:31.424 { 00:20:31.424 "name": "NewBaseBdev", 00:20:31.424 "aliases": [ 00:20:31.424 "748f296d-58bc-4273-8caf-dd9f07131dbc" 00:20:31.424 ], 00:20:31.424 "product_name": "Malloc disk", 00:20:31.424 "block_size": 512, 00:20:31.424 "num_blocks": 65536, 00:20:31.424 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:31.424 "assigned_rate_limits": { 00:20:31.424 "rw_ios_per_sec": 0, 00:20:31.424 "rw_mbytes_per_sec": 0, 00:20:31.424 "r_mbytes_per_sec": 0, 00:20:31.424 "w_mbytes_per_sec": 0 00:20:31.424 }, 00:20:31.424 "claimed": true, 00:20:31.424 "claim_type": "exclusive_write", 00:20:31.424 "zoned": false, 00:20:31.424 "supported_io_types": { 00:20:31.424 "read": true, 00:20:31.424 "write": true, 00:20:31.424 "unmap": true, 00:20:31.424 "flush": true, 00:20:31.424 "reset": true, 00:20:31.424 "nvme_admin": false, 00:20:31.424 "nvme_io": false, 00:20:31.424 "nvme_io_md": false, 00:20:31.424 "write_zeroes": true, 00:20:31.424 "zcopy": true, 00:20:31.424 "get_zone_info": false, 00:20:31.424 "zone_management": false, 00:20:31.424 "zone_append": false, 00:20:31.424 "compare": false, 00:20:31.424 "compare_and_write": false, 00:20:31.424 "abort": true, 00:20:31.424 "seek_hole": false, 00:20:31.424 "seek_data": false, 00:20:31.424 "copy": true, 00:20:31.424 "nvme_iov_md": false 00:20:31.424 }, 00:20:31.424 "memory_domains": [ 00:20:31.424 { 00:20:31.424 "dma_device_id": "system", 00:20:31.424 "dma_device_type": 1 00:20:31.424 }, 00:20:31.424 { 00:20:31.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.424 "dma_device_type": 2 00:20:31.424 } 00:20:31.424 ], 00:20:31.424 "driver_specific": {} 00:20:31.424 } 00:20:31.424 ] 00:20:31.424 19:40:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.424 "name": "Existed_Raid", 00:20:31.424 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:31.424 "strip_size_kb": 64, 00:20:31.424 "state": "online", 00:20:31.424 "raid_level": "raid5f", 00:20:31.424 "superblock": true, 00:20:31.424 "num_base_bdevs": 4, 00:20:31.424 "num_base_bdevs_discovered": 4, 00:20:31.424 "num_base_bdevs_operational": 4, 00:20:31.424 "base_bdevs_list": [ 00:20:31.424 { 00:20:31.424 "name": "NewBaseBdev", 00:20:31.424 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:31.424 "is_configured": true, 00:20:31.424 "data_offset": 2048, 00:20:31.424 "data_size": 63488 00:20:31.424 }, 00:20:31.424 { 00:20:31.424 "name": "BaseBdev2", 00:20:31.424 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:31.424 "is_configured": true, 00:20:31.424 "data_offset": 2048, 00:20:31.424 "data_size": 63488 00:20:31.424 }, 00:20:31.424 { 00:20:31.424 "name": "BaseBdev3", 00:20:31.424 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:31.424 "is_configured": true, 00:20:31.424 "data_offset": 2048, 00:20:31.424 "data_size": 63488 00:20:31.424 }, 00:20:31.424 { 00:20:31.424 "name": "BaseBdev4", 00:20:31.424 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:31.424 "is_configured": true, 00:20:31.424 "data_offset": 2048, 00:20:31.424 "data_size": 63488 00:20:31.424 } 00:20:31.424 ] 00:20:31.424 }' 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.424 19:40:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.992 [2024-12-05 19:40:25.339206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.992 "name": "Existed_Raid", 00:20:31.992 "aliases": [ 00:20:31.992 "df6b2050-9c5c-4194-a0f5-75a2b606e0a7" 00:20:31.992 ], 00:20:31.992 "product_name": "Raid Volume", 00:20:31.992 "block_size": 512, 00:20:31.992 "num_blocks": 190464, 00:20:31.992 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:31.992 "assigned_rate_limits": { 00:20:31.992 "rw_ios_per_sec": 0, 00:20:31.992 "rw_mbytes_per_sec": 0, 00:20:31.992 "r_mbytes_per_sec": 0, 00:20:31.992 "w_mbytes_per_sec": 0 00:20:31.992 }, 00:20:31.992 "claimed": false, 00:20:31.992 "zoned": false, 00:20:31.992 "supported_io_types": { 00:20:31.992 "read": true, 00:20:31.992 "write": true, 00:20:31.992 "unmap": false, 00:20:31.992 "flush": false, 00:20:31.992 "reset": true, 00:20:31.992 "nvme_admin": false, 00:20:31.992 "nvme_io": false, 
00:20:31.992 "nvme_io_md": false, 00:20:31.992 "write_zeroes": true, 00:20:31.992 "zcopy": false, 00:20:31.992 "get_zone_info": false, 00:20:31.992 "zone_management": false, 00:20:31.992 "zone_append": false, 00:20:31.992 "compare": false, 00:20:31.992 "compare_and_write": false, 00:20:31.992 "abort": false, 00:20:31.992 "seek_hole": false, 00:20:31.992 "seek_data": false, 00:20:31.992 "copy": false, 00:20:31.992 "nvme_iov_md": false 00:20:31.992 }, 00:20:31.992 "driver_specific": { 00:20:31.992 "raid": { 00:20:31.992 "uuid": "df6b2050-9c5c-4194-a0f5-75a2b606e0a7", 00:20:31.992 "strip_size_kb": 64, 00:20:31.992 "state": "online", 00:20:31.992 "raid_level": "raid5f", 00:20:31.992 "superblock": true, 00:20:31.992 "num_base_bdevs": 4, 00:20:31.992 "num_base_bdevs_discovered": 4, 00:20:31.992 "num_base_bdevs_operational": 4, 00:20:31.992 "base_bdevs_list": [ 00:20:31.992 { 00:20:31.992 "name": "NewBaseBdev", 00:20:31.992 "uuid": "748f296d-58bc-4273-8caf-dd9f07131dbc", 00:20:31.992 "is_configured": true, 00:20:31.992 "data_offset": 2048, 00:20:31.992 "data_size": 63488 00:20:31.992 }, 00:20:31.992 { 00:20:31.992 "name": "BaseBdev2", 00:20:31.992 "uuid": "5038360e-a65c-4953-ab5d-bffe98bd145c", 00:20:31.992 "is_configured": true, 00:20:31.992 "data_offset": 2048, 00:20:31.992 "data_size": 63488 00:20:31.992 }, 00:20:31.992 { 00:20:31.992 "name": "BaseBdev3", 00:20:31.992 "uuid": "3e2115f6-94e1-4729-bcd8-a8a551d8b9ef", 00:20:31.992 "is_configured": true, 00:20:31.992 "data_offset": 2048, 00:20:31.992 "data_size": 63488 00:20:31.992 }, 00:20:31.992 { 00:20:31.992 "name": "BaseBdev4", 00:20:31.992 "uuid": "44d3effc-472e-477c-b74a-1791cb84a4ec", 00:20:31.992 "is_configured": true, 00:20:31.992 "data_offset": 2048, 00:20:31.992 "data_size": 63488 00:20:31.992 } 00:20:31.992 ] 00:20:31.992 } 00:20:31.992 } 00:20:31.992 }' 00:20:31.992 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:32.251 BaseBdev2 00:20:32.251 BaseBdev3 00:20:32.251 BaseBdev4' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.251 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.509 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.509 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.509 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:32.509 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.509 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.509 [2024-12-05 19:40:25.715079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.509 [2024-12-05 19:40:25.715152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.509 [2024-12-05 19:40:25.715259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.510 [2024-12-05 19:40:25.715652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.510 [2024-12-05 19:40:25.715680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83857 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83857 ']' 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83857 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83857 00:20:32.510 killing process with pid 83857 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83857' 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83857 00:20:32.510 [2024-12-05 19:40:25.751834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.510 19:40:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83857 00:20:32.769 [2024-12-05 19:40:26.112124] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:34.146 19:40:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:34.146 00:20:34.146 real 0m13.123s 00:20:34.146 user 0m21.745s 00:20:34.146 sys 0m1.801s 00:20:34.146 19:40:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.146 19:40:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.146 ************************************ 00:20:34.146 END TEST raid5f_state_function_test_sb 00:20:34.146 ************************************ 00:20:34.146 19:40:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:20:34.146 19:40:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:34.146 19:40:27 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.146 19:40:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.146 ************************************ 00:20:34.146 START TEST raid5f_superblock_test 00:20:34.146 ************************************ 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84540 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84540 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84540 ']' 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.146 19:40:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.146 [2024-12-05 19:40:27.373685] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:20:34.146 [2024-12-05 19:40:27.373926] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84540 ] 00:20:34.146 [2024-12-05 19:40:27.566361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.405 [2024-12-05 19:40:27.739994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.679 [2024-12-05 19:40:27.953638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.679 [2024-12-05 19:40:27.953757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.971 malloc1 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.971 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.971 [2024-12-05 19:40:28.409720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:34.971 [2024-12-05 19:40:28.409789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.971 [2024-12-05 19:40:28.409822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:34.971 [2024-12-05 19:40:28.409838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.230 [2024-12-05 19:40:28.412837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.230 [2024-12-05 19:40:28.412883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:35.230 pt1 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.230 malloc2 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.230 [2024-12-05 19:40:28.467898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:35.230 [2024-12-05 19:40:28.467973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.230 [2024-12-05 19:40:28.468013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:35.230 [2024-12-05 19:40:28.468029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.230 [2024-12-05 19:40:28.470916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.230 [2024-12-05 19:40:28.470961] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:35.230 pt2 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.230 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.231 malloc3 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.231 [2024-12-05 19:40:28.539245] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:35.231 [2024-12-05 19:40:28.539311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.231 [2024-12-05 19:40:28.539345] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:35.231 [2024-12-05 19:40:28.539371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.231 [2024-12-05 19:40:28.542196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.231 [2024-12-05 19:40:28.542240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:35.231 pt3 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.231 19:40:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.231 malloc4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.231 [2024-12-05 19:40:28.592227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:35.231 [2024-12-05 19:40:28.592338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.231 [2024-12-05 19:40:28.592370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:35.231 [2024-12-05 19:40:28.592385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.231 [2024-12-05 19:40:28.595236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.231 [2024-12-05 19:40:28.595279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:35.231 pt4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.231 [2024-12-05 19:40:28.600235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:35.231 [2024-12-05 19:40:28.602653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.231 [2024-12-05 19:40:28.602828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:35.231 [2024-12-05 19:40:28.602913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:35.231 [2024-12-05 19:40:28.603187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:35.231 [2024-12-05 19:40:28.603219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:35.231 [2024-12-05 19:40:28.603546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:35.231 [2024-12-05 19:40:28.610622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:35.231 [2024-12-05 19:40:28.610655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:35.231 [2024-12-05 19:40:28.610914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.231 
19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.231 "name": "raid_bdev1", 00:20:35.231 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:35.231 "strip_size_kb": 64, 00:20:35.231 "state": "online", 00:20:35.231 "raid_level": "raid5f", 00:20:35.231 "superblock": true, 00:20:35.231 "num_base_bdevs": 4, 00:20:35.231 "num_base_bdevs_discovered": 4, 00:20:35.231 "num_base_bdevs_operational": 4, 00:20:35.231 "base_bdevs_list": [ 00:20:35.231 { 00:20:35.231 "name": "pt1", 00:20:35.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.231 "is_configured": true, 00:20:35.231 "data_offset": 2048, 00:20:35.231 "data_size": 63488 00:20:35.231 }, 00:20:35.231 { 00:20:35.231 "name": "pt2", 00:20:35.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.231 "is_configured": true, 00:20:35.231 "data_offset": 2048, 00:20:35.231 
"data_size": 63488 00:20:35.231 }, 00:20:35.231 { 00:20:35.231 "name": "pt3", 00:20:35.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.231 "is_configured": true, 00:20:35.231 "data_offset": 2048, 00:20:35.231 "data_size": 63488 00:20:35.231 }, 00:20:35.231 { 00:20:35.231 "name": "pt4", 00:20:35.231 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:35.231 "is_configured": true, 00:20:35.231 "data_offset": 2048, 00:20:35.231 "data_size": 63488 00:20:35.231 } 00:20:35.231 ] 00:20:35.231 }' 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.231 19:40:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:35.797 [2024-12-05 19:40:29.122801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.797 "name": "raid_bdev1", 00:20:35.797 "aliases": [ 00:20:35.797 "e2a971a4-5244-4d7a-9e76-0564d3f904f2" 00:20:35.797 ], 00:20:35.797 "product_name": "Raid Volume", 00:20:35.797 "block_size": 512, 00:20:35.797 "num_blocks": 190464, 00:20:35.797 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:35.797 "assigned_rate_limits": { 00:20:35.797 "rw_ios_per_sec": 0, 00:20:35.797 "rw_mbytes_per_sec": 0, 00:20:35.797 "r_mbytes_per_sec": 0, 00:20:35.797 "w_mbytes_per_sec": 0 00:20:35.797 }, 00:20:35.797 "claimed": false, 00:20:35.797 "zoned": false, 00:20:35.797 "supported_io_types": { 00:20:35.797 "read": true, 00:20:35.797 "write": true, 00:20:35.797 "unmap": false, 00:20:35.797 "flush": false, 00:20:35.797 "reset": true, 00:20:35.797 "nvme_admin": false, 00:20:35.797 "nvme_io": false, 00:20:35.797 "nvme_io_md": false, 00:20:35.797 "write_zeroes": true, 00:20:35.797 "zcopy": false, 00:20:35.797 "get_zone_info": false, 00:20:35.797 "zone_management": false, 00:20:35.797 "zone_append": false, 00:20:35.797 "compare": false, 00:20:35.797 "compare_and_write": false, 00:20:35.797 "abort": false, 00:20:35.797 "seek_hole": false, 00:20:35.797 "seek_data": false, 00:20:35.797 "copy": false, 00:20:35.797 "nvme_iov_md": false 00:20:35.797 }, 00:20:35.797 "driver_specific": { 00:20:35.797 "raid": { 00:20:35.797 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:35.797 "strip_size_kb": 64, 00:20:35.797 "state": "online", 00:20:35.797 "raid_level": "raid5f", 00:20:35.797 "superblock": true, 00:20:35.797 "num_base_bdevs": 4, 00:20:35.797 "num_base_bdevs_discovered": 4, 00:20:35.797 "num_base_bdevs_operational": 4, 00:20:35.797 "base_bdevs_list": [ 00:20:35.797 { 00:20:35.797 "name": "pt1", 00:20:35.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.797 "is_configured": true, 00:20:35.797 "data_offset": 2048, 
00:20:35.797 "data_size": 63488 00:20:35.797 }, 00:20:35.797 { 00:20:35.797 "name": "pt2", 00:20:35.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.797 "is_configured": true, 00:20:35.797 "data_offset": 2048, 00:20:35.797 "data_size": 63488 00:20:35.797 }, 00:20:35.797 { 00:20:35.797 "name": "pt3", 00:20:35.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.797 "is_configured": true, 00:20:35.797 "data_offset": 2048, 00:20:35.797 "data_size": 63488 00:20:35.797 }, 00:20:35.797 { 00:20:35.797 "name": "pt4", 00:20:35.797 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:35.797 "is_configured": true, 00:20:35.797 "data_offset": 2048, 00:20:35.797 "data_size": 63488 00:20:35.797 } 00:20:35.797 ] 00:20:35.797 } 00:20:35.797 } 00:20:35.797 }' 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:35.797 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:35.797 pt2 00:20:35.797 pt3 00:20:35.797 pt4' 00:20:35.798 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.056 19:40:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.056 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.057 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:36.315 [2024-12-05 19:40:29.502848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e2a971a4-5244-4d7a-9e76-0564d3f904f2 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e2a971a4-5244-4d7a-9e76-0564d3f904f2 ']' 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.315 [2024-12-05 19:40:29.558605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.315 [2024-12-05 19:40:29.558636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.315 [2024-12-05 19:40:29.558756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.315 [2024-12-05 19:40:29.558873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.315 [2024-12-05 19:40:29.558897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.315 
19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:36.315 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.316 19:40:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.316 [2024-12-05 19:40:29.714685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:36.316 [2024-12-05 19:40:29.717252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:36.316 [2024-12-05 19:40:29.717327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:36.316 [2024-12-05 19:40:29.717384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:36.316 [2024-12-05 19:40:29.717463] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:36.316 [2024-12-05 19:40:29.717535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:36.316 [2024-12-05 19:40:29.717569] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:36.316 [2024-12-05 19:40:29.717601] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:36.316 [2024-12-05 19:40:29.717623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.316 [2024-12-05 19:40:29.717639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:36.316 request: 00:20:36.316 { 00:20:36.316 "name": "raid_bdev1", 00:20:36.316 "raid_level": "raid5f", 00:20:36.316 "base_bdevs": [ 00:20:36.316 "malloc1", 00:20:36.316 "malloc2", 00:20:36.316 "malloc3", 00:20:36.316 "malloc4" 00:20:36.316 ], 00:20:36.316 "strip_size_kb": 64, 00:20:36.316 "superblock": false, 00:20:36.316 "method": "bdev_raid_create", 00:20:36.316 "req_id": 1 00:20:36.316 } 00:20:36.316 Got JSON-RPC error response 
00:20:36.316 response: 00:20:36.316 { 00:20:36.316 "code": -17, 00:20:36.316 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:36.316 } 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.316 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.574 [2024-12-05 19:40:29.778657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:36.574 [2024-12-05 19:40:29.778764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:20:36.574 [2024-12-05 19:40:29.778791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:36.574 [2024-12-05 19:40:29.778808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.574 [2024-12-05 19:40:29.781805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.574 [2024-12-05 19:40:29.781869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:36.574 [2024-12-05 19:40:29.781961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:36.574 [2024-12-05 19:40:29.782031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:36.574 pt1 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.574 "name": "raid_bdev1", 00:20:36.574 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:36.574 "strip_size_kb": 64, 00:20:36.574 "state": "configuring", 00:20:36.574 "raid_level": "raid5f", 00:20:36.574 "superblock": true, 00:20:36.574 "num_base_bdevs": 4, 00:20:36.574 "num_base_bdevs_discovered": 1, 00:20:36.574 "num_base_bdevs_operational": 4, 00:20:36.574 "base_bdevs_list": [ 00:20:36.574 { 00:20:36.574 "name": "pt1", 00:20:36.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.574 "is_configured": true, 00:20:36.574 "data_offset": 2048, 00:20:36.574 "data_size": 63488 00:20:36.574 }, 00:20:36.574 { 00:20:36.574 "name": null, 00:20:36.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.574 "is_configured": false, 00:20:36.574 "data_offset": 2048, 00:20:36.574 "data_size": 63488 00:20:36.574 }, 00:20:36.574 { 00:20:36.574 "name": null, 00:20:36.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.574 "is_configured": false, 00:20:36.574 "data_offset": 2048, 00:20:36.574 "data_size": 63488 00:20:36.574 }, 00:20:36.574 { 00:20:36.574 "name": null, 00:20:36.574 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.574 "is_configured": false, 00:20:36.574 "data_offset": 2048, 00:20:36.574 "data_size": 63488 00:20:36.574 } 00:20:36.574 ] 00:20:36.574 }' 
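The `verify_raid_bdev_state` helper traced above pulls a single record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking its state. The same selection can be sketched in Python; the sample record below is abbreviated from the JSON captured in this log, not the full RPC output:

```python
import json

# Abbreviated bdev_raid_get_bdevs output, shaped like the record logged above.
bdevs = json.loads("""
[
  {"name": "raid_bdev1", "state": "configuring", "raid_level": "raid5f",
   "strip_size_kb": 64, "superblock": true,
   "num_base_bdevs": 4, "num_base_bdevs_discovered": 1,
   "num_base_bdevs_operational": 4},
  {"name": "some_other_bdev", "state": "online"}
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The harness then compares these fields against its expected_state,
# raid_level, strip_size and num_base_bdevs_operational locals.
print(raid_bdev_info["state"])
```

The harness's locals (`expected_state=configuring`, `raid_level=raid5f`, `strip_size=64`, `num_base_bdevs_operational=4`) are then matched field by field against this selected object.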
00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.574 19:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.144 [2024-12-05 19:40:30.298929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.144 [2024-12-05 19:40:30.299018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.144 [2024-12-05 19:40:30.299048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:37.144 [2024-12-05 19:40:30.299065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.144 [2024-12-05 19:40:30.299676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.144 [2024-12-05 19:40:30.299764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.144 [2024-12-05 19:40:30.299888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.144 [2024-12-05 19:40:30.299949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.144 pt2 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.144 [2024-12-05 19:40:30.306929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:37.144 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.144 "name": "raid_bdev1", 00:20:37.144 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:37.144 "strip_size_kb": 64, 00:20:37.144 "state": "configuring", 00:20:37.144 "raid_level": "raid5f", 00:20:37.144 "superblock": true, 00:20:37.144 "num_base_bdevs": 4, 00:20:37.144 "num_base_bdevs_discovered": 1, 00:20:37.144 "num_base_bdevs_operational": 4, 00:20:37.144 "base_bdevs_list": [ 00:20:37.144 { 00:20:37.144 "name": "pt1", 00:20:37.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:37.144 "is_configured": true, 00:20:37.144 "data_offset": 2048, 00:20:37.145 "data_size": 63488 00:20:37.145 }, 00:20:37.145 { 00:20:37.145 "name": null, 00:20:37.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.145 "is_configured": false, 00:20:37.145 "data_offset": 0, 00:20:37.145 "data_size": 63488 00:20:37.145 }, 00:20:37.145 { 00:20:37.145 "name": null, 00:20:37.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:37.145 "is_configured": false, 00:20:37.145 "data_offset": 2048, 00:20:37.145 "data_size": 63488 00:20:37.145 }, 00:20:37.145 { 00:20:37.145 "name": null, 00:20:37.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:37.145 "is_configured": false, 00:20:37.145 "data_offset": 2048, 00:20:37.145 "data_size": 63488 00:20:37.145 } 00:20:37.145 ] 00:20:37.145 }' 00:20:37.145 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.145 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
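The failed `bdev_raid_create` call near the top of this excerpt (error `-17`, "File exists", because the malloc bdevs still carried superblocks from a different raid bdev) is an ordinary JSON-RPC exchange. A minimal sketch of the payload, with parameter values copied from the logged request; the `"jsonrpc"`/`"id"` envelope fields are the standard JSON-RPC 2.0 framing and are an assumption here, since the log prints only the params and `req_id`:

```python
import json

# bdev_raid_create request as logged by the failing raid5f_superblock_test call.
request = {
    "jsonrpc": "2.0",  # assumed standard envelope; the log shows only params/req_id
    "id": 1,
    "method": "bdev_raid_create",
    "params": {
        "name": "raid_bdev1",
        "raid_level": "raid5f",
        "base_bdevs": ["malloc1", "malloc2", "malloc3", "malloc4"],
        "strip_size_kb": 64,
        "superblock": False,
    },
}

# Error response observed in the log: each base bdev already holds a
# superblock belonging to a different raid bdev, so configure is rejected.
error = {"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}

print(json.dumps(request["params"], indent=2))
```

In the trace, the test script treats this as the expected outcome: `es=1` is recorded and the `(( !es == 0 ))` check passes, confirming the RPC failed as intended.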
00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.404 [2024-12-05 19:40:30.835123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.404 [2024-12-05 19:40:30.835241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.404 [2024-12-05 19:40:30.835270] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:37.404 [2024-12-05 19:40:30.835284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.404 [2024-12-05 19:40:30.835933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.404 [2024-12-05 19:40:30.835969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.404 [2024-12-05 19:40:30.836074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.404 [2024-12-05 19:40:30.836106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.404 pt2 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.404 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 [2024-12-05 19:40:30.847070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
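Later in this excerpt, `verify_raid_bdev_properties` compares bdev metadata with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and matches the result against `512` followed by three escaped spaces (`[[ 512 == \5\1\2\ \ \  ]]`). That works because jq's `join` renders `null` entries as empty strings, so a bdev with no metadata fields collapses to `"512   "`. A small Python sketch of that behavior:

```python
# jq's join(" ") treats null values as empty strings. For a passthru bdev
# with no separate metadata, only block_size is set, so the comparison
# string is "512" followed by three separator spaces.
def jq_join(values, sep=" "):
    """Mimic jq's join: stringify values, render None/null as ''."""
    return sep.join("" if v is None else str(v) for v in values)

# [.block_size, .md_size, .md_interleave, .dif_type] for a plain 512-byte bdev:
cmp_base_bdev = jq_join([512, None, None, None])
print(repr(cmp_base_bdev))  # trailing spaces are significant in the [[ ]] match
```

This is why the log shows `cmp_base_bdev='512 '` with the trailing whitespace preserved in the `[[ ... ]]` pattern: the comparison would fail if the absent fields produced anything other than empty strings.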
00:20:37.663 [2024-12-05 19:40:30.847164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.663 [2024-12-05 19:40:30.847219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:37.663 [2024-12-05 19:40:30.847250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.663 [2024-12-05 19:40:30.847703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.663 [2024-12-05 19:40:30.847774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:37.663 [2024-12-05 19:40:30.847864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:37.663 [2024-12-05 19:40:30.847900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:37.663 pt3 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 [2024-12-05 19:40:30.855030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:37.663 [2024-12-05 19:40:30.855078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.663 [2024-12-05 19:40:30.855106] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:37.663 [2024-12-05 19:40:30.855120] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.663 [2024-12-05 19:40:30.855574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.663 [2024-12-05 19:40:30.855613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:37.663 [2024-12-05 19:40:30.855693] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:37.663 [2024-12-05 19:40:30.855744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:37.663 [2024-12-05 19:40:30.855930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:37.663 [2024-12-05 19:40:30.855956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:37.663 [2024-12-05 19:40:30.856255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:37.663 [2024-12-05 19:40:30.862952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:37.663 [2024-12-05 19:40:30.862986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:37.663 [2024-12-05 19:40:30.863282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.663 pt4 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.663 "name": "raid_bdev1", 00:20:37.663 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:37.663 "strip_size_kb": 64, 00:20:37.663 "state": "online", 00:20:37.663 "raid_level": "raid5f", 00:20:37.663 "superblock": true, 00:20:37.663 "num_base_bdevs": 4, 00:20:37.663 "num_base_bdevs_discovered": 4, 00:20:37.663 "num_base_bdevs_operational": 4, 00:20:37.663 "base_bdevs_list": [ 00:20:37.663 { 00:20:37.663 "name": "pt1", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:37.663 "is_configured": true, 00:20:37.663 
"data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 }, 00:20:37.663 { 00:20:37.663 "name": "pt2", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.663 "is_configured": true, 00:20:37.663 "data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 }, 00:20:37.663 { 00:20:37.663 "name": "pt3", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:37.663 "is_configured": true, 00:20:37.663 "data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 }, 00:20:37.663 { 00:20:37.663 "name": "pt4", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:37.663 "is_configured": true, 00:20:37.663 "data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 } 00:20:37.663 ] 00:20:37.663 }' 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.663 19:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.005 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.006 19:40:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.006 [2024-12-05 19:40:31.391150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.006 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.265 "name": "raid_bdev1", 00:20:38.265 "aliases": [ 00:20:38.265 "e2a971a4-5244-4d7a-9e76-0564d3f904f2" 00:20:38.265 ], 00:20:38.265 "product_name": "Raid Volume", 00:20:38.265 "block_size": 512, 00:20:38.265 "num_blocks": 190464, 00:20:38.265 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:38.265 "assigned_rate_limits": { 00:20:38.265 "rw_ios_per_sec": 0, 00:20:38.265 "rw_mbytes_per_sec": 0, 00:20:38.265 "r_mbytes_per_sec": 0, 00:20:38.265 "w_mbytes_per_sec": 0 00:20:38.265 }, 00:20:38.265 "claimed": false, 00:20:38.265 "zoned": false, 00:20:38.265 "supported_io_types": { 00:20:38.265 "read": true, 00:20:38.265 "write": true, 00:20:38.265 "unmap": false, 00:20:38.265 "flush": false, 00:20:38.265 "reset": true, 00:20:38.265 "nvme_admin": false, 00:20:38.265 "nvme_io": false, 00:20:38.265 "nvme_io_md": false, 00:20:38.265 "write_zeroes": true, 00:20:38.265 "zcopy": false, 00:20:38.265 "get_zone_info": false, 00:20:38.265 "zone_management": false, 00:20:38.265 "zone_append": false, 00:20:38.265 "compare": false, 00:20:38.265 "compare_and_write": false, 00:20:38.265 "abort": false, 00:20:38.265 "seek_hole": false, 00:20:38.265 "seek_data": false, 00:20:38.265 "copy": false, 00:20:38.265 "nvme_iov_md": false 00:20:38.265 }, 00:20:38.265 "driver_specific": { 00:20:38.265 "raid": { 00:20:38.265 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2", 00:20:38.265 "strip_size_kb": 64, 00:20:38.265 "state": "online", 00:20:38.265 "raid_level": "raid5f", 00:20:38.265 "superblock": true, 00:20:38.265 "num_base_bdevs": 4, 00:20:38.265 "num_base_bdevs_discovered": 4, 
00:20:38.265 "num_base_bdevs_operational": 4, 00:20:38.265 "base_bdevs_list": [ 00:20:38.265 { 00:20:38.265 "name": "pt1", 00:20:38.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:38.265 "is_configured": true, 00:20:38.265 "data_offset": 2048, 00:20:38.265 "data_size": 63488 00:20:38.265 }, 00:20:38.265 { 00:20:38.265 "name": "pt2", 00:20:38.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.265 "is_configured": true, 00:20:38.265 "data_offset": 2048, 00:20:38.265 "data_size": 63488 00:20:38.265 }, 00:20:38.265 { 00:20:38.265 "name": "pt3", 00:20:38.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.265 "is_configured": true, 00:20:38.265 "data_offset": 2048, 00:20:38.265 "data_size": 63488 00:20:38.265 }, 00:20:38.265 { 00:20:38.265 "name": "pt4", 00:20:38.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:38.265 "is_configured": true, 00:20:38.265 "data_offset": 2048, 00:20:38.265 "data_size": 63488 00:20:38.265 } 00:20:38.265 ] 00:20:38.265 } 00:20:38.265 } 00:20:38.265 }' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:38.265 pt2 00:20:38.265 pt3 00:20:38.265 pt4' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.265 19:40:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.265 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:38.523 [2024-12-05 19:40:31.759262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.523 
19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e2a971a4-5244-4d7a-9e76-0564d3f904f2 '!=' e2a971a4-5244-4d7a-9e76-0564d3f904f2 ']'
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.523 [2024-12-05 19:40:31.811170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.523 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.524 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:38.524 "name": "raid_bdev1",
00:20:38.524 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2",
00:20:38.524 "strip_size_kb": 64,
00:20:38.524 "state": "online",
00:20:38.524 "raid_level": "raid5f",
00:20:38.524 "superblock": true,
00:20:38.524 "num_base_bdevs": 4,
00:20:38.524 "num_base_bdevs_discovered": 3,
00:20:38.524 "num_base_bdevs_operational": 3,
00:20:38.524 "base_bdevs_list": [
00:20:38.524 {
00:20:38.524 "name": null,
00:20:38.524 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:38.524 "is_configured": false,
00:20:38.524 "data_offset": 0,
00:20:38.524 "data_size": 63488
00:20:38.524 },
00:20:38.524 {
00:20:38.524 "name": "pt2",
00:20:38.524 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:38.524 "is_configured": true,
00:20:38.524 "data_offset": 2048,
00:20:38.524 "data_size": 63488
00:20:38.524 },
00:20:38.524 {
00:20:38.524 "name": "pt3",
00:20:38.524 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:38.524 "is_configured": true,
00:20:38.524 "data_offset": 2048,
00:20:38.524 "data_size": 63488
00:20:38.524 },
00:20:38.524 {
00:20:38.524 "name": "pt4",
00:20:38.524 "uuid": "00000000-0000-0000-0000-000000000004",
00:20:38.524 "is_configured": true,
00:20:38.524 "data_offset": 2048,
00:20:38.524 "data_size": 63488
00:20:38.524 }
00:20:38.524 ]
00:20:38.524 }'
00:20:38.524 19:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:38.524 19:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 [2024-12-05 19:40:32.383273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:39.092 [2024-12-05 19:40:32.383313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:39.092 [2024-12-05 19:40:32.383410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:39.092 [2024-12-05 19:40:32.383581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:39.092 [2024-12-05 19:40:32.383613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.092 [2024-12-05 19:40:32.471232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:39.092 [2024-12-05 19:40:32.471295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:39.092 [2024-12-05 19:40:32.471324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:20:39.092 [2024-12-05 19:40:32.471339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:39.092 [2024-12-05 19:40:32.474474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:39.092 [2024-12-05 19:40:32.474518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:39.092 [2024-12-05 19:40:32.474634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:20:39.092 [2024-12-05 19:40:32.474710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:39.092 pt2
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:39.092 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:39.093 "name": "raid_bdev1",
00:20:39.093 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2",
00:20:39.093 "strip_size_kb": 64,
00:20:39.093 "state": "configuring",
00:20:39.093 "raid_level": "raid5f",
00:20:39.093 "superblock": true,
00:20:39.093 "num_base_bdevs": 4,
00:20:39.093 "num_base_bdevs_discovered": 1,
00:20:39.093 "num_base_bdevs_operational": 3,
00:20:39.093 "base_bdevs_list": [
00:20:39.093 {
00:20:39.093 "name": null,
00:20:39.093 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:39.093 "is_configured": false,
00:20:39.093 "data_offset": 2048,
00:20:39.093 "data_size": 63488
00:20:39.093 },
00:20:39.093 {
00:20:39.093 "name": "pt2",
00:20:39.093 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:39.093 "is_configured": true,
00:20:39.093 "data_offset": 2048,
00:20:39.093 "data_size": 63488
00:20:39.093 },
00:20:39.093 {
00:20:39.093 "name": null,
00:20:39.093 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:39.093 "is_configured": false,
00:20:39.093 "data_offset": 2048,
00:20:39.093 "data_size": 63488
00:20:39.093 },
00:20:39.093 {
00:20:39.093 "name": null,
00:20:39.093 "uuid": "00000000-0000-0000-0000-000000000004",
00:20:39.093 "is_configured": false,
00:20:39.093 "data_offset": 2048,
00:20:39.093 "data_size": 63488
00:20:39.093 }
00:20:39.093 ]
00:20:39.093 }'
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:39.093 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.660 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:20:39.660 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:20:39.660 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:20:39.660 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.660 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.661 [2024-12-05 19:40:32.995471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:39.661 [2024-12-05 19:40:32.995587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:39.661 [2024-12-05 19:40:32.995626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:20:39.661 [2024-12-05 19:40:32.995643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:39.661 [2024-12-05 19:40:32.996256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:39.661 [2024-12-05 19:40:32.996292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:39.661 [2024-12-05 19:40:32.996415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:20:39.661 [2024-12-05 19:40:32.996447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:39.661 pt3
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:39.661 19:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:39.661 "name": "raid_bdev1",
00:20:39.661 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2",
00:20:39.661 "strip_size_kb": 64,
00:20:39.661 "state": "configuring",
00:20:39.661 "raid_level": "raid5f",
00:20:39.661 "superblock": true,
00:20:39.661 "num_base_bdevs": 4,
00:20:39.661 "num_base_bdevs_discovered": 2,
00:20:39.661 "num_base_bdevs_operational": 3,
00:20:39.661 "base_bdevs_list": [
00:20:39.661 {
00:20:39.661 "name": null,
00:20:39.661 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:39.661 "is_configured": false,
00:20:39.661 "data_offset": 2048,
00:20:39.661 "data_size": 63488
00:20:39.661 },
00:20:39.661 {
00:20:39.661 "name": "pt2",
00:20:39.661 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:39.661 "is_configured": true,
00:20:39.661 "data_offset": 2048,
00:20:39.661 "data_size": 63488
00:20:39.661 },
00:20:39.661 {
00:20:39.661 "name": "pt3",
00:20:39.661 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:39.661 "is_configured": true,
00:20:39.661 "data_offset": 2048,
00:20:39.661 "data_size": 63488
00:20:39.661 },
00:20:39.661 {
00:20:39.661 "name": null,
00:20:39.661 "uuid": "00000000-0000-0000-0000-000000000004",
00:20:39.661 "is_configured": false,
00:20:39.661 "data_offset": 2048,
00:20:39.661 "data_size": 63488
00:20:39.661 }
00:20:39.661 ]
00:20:39.661 }'
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:39.661 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.230 [2024-12-05 19:40:33.527673] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:20:40.230 [2024-12-05 19:40:33.527781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:40.230 [2024-12-05 19:40:33.527817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:20:40.230 [2024-12-05 19:40:33.527834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:40.230 [2024-12-05 19:40:33.528460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:40.230 [2024-12-05 19:40:33.528495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:20:40.230 [2024-12-05 19:40:33.528639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:20:40.230 [2024-12-05 19:40:33.528690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:20:40.230 [2024-12-05 19:40:33.528918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:20:40.230 [2024-12-05 19:40:33.528944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:20:40.230 [2024-12-05 19:40:33.529257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:20:40.230 [2024-12-05 19:40:33.536056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:20:40.230 [2024-12-05 19:40:33.536091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:20:40.230 [2024-12-05 19:40:33.536442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:40.230 pt4
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:40.230 "name": "raid_bdev1",
00:20:40.230 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2",
00:20:40.230 "strip_size_kb": 64,
00:20:40.230 "state": "online",
00:20:40.230 "raid_level": "raid5f",
00:20:40.230 "superblock": true,
00:20:40.230 "num_base_bdevs": 4,
00:20:40.230 "num_base_bdevs_discovered": 3,
00:20:40.230 "num_base_bdevs_operational": 3,
00:20:40.230 "base_bdevs_list": [
00:20:40.230 {
00:20:40.230 "name": null,
00:20:40.230 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:40.230 "is_configured": false,
00:20:40.230 "data_offset": 2048,
00:20:40.230 "data_size": 63488
00:20:40.230 },
00:20:40.230 {
00:20:40.230 "name": "pt2",
00:20:40.230 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:40.230 "is_configured": true,
00:20:40.230 "data_offset": 2048,
00:20:40.230 "data_size": 63488
00:20:40.230 },
00:20:40.230 {
00:20:40.230 "name": "pt3",
00:20:40.230 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:40.230 "is_configured": true,
00:20:40.230 "data_offset": 2048,
00:20:40.230 "data_size": 63488
00:20:40.230 },
00:20:40.230 {
00:20:40.230 "name": "pt4",
00:20:40.230 "uuid": "00000000-0000-0000-0000-000000000004",
00:20:40.230 "is_configured": true,
00:20:40.230 "data_offset": 2048,
00:20:40.230 "data_size": 63488
00:20:40.230 }
00:20:40.230 ]
00:20:40.230 }'
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:40.230 19:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.799 [2024-12-05 19:40:34.112104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:40.799 [2024-12-05 19:40:34.112142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:40.799 [2024-12-05 19:40:34.112251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:40.799 [2024-12-05 19:40:34.112352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:40.799 [2024-12-05 19:40:34.112373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.799 [2024-12-05 19:40:34.180087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:40.799 [2024-12-05 19:40:34.180157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:40.799 [2024-12-05 19:40:34.180198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:20:40.799 [2024-12-05 19:40:34.180219] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:40.799 [2024-12-05 19:40:34.183301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:40.799 [2024-12-05 19:40:34.183376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:40.799 [2024-12-05 19:40:34.183507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:20:40.799 [2024-12-05 19:40:34.183570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:40.799 [2024-12-05 19:40:34.183775] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:20:40.799 [2024-12-05 19:40:34.183807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:40.799 [2024-12-05 19:40:34.183837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:20:40.799 [2024-12-05 19:40:34.183947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:40.799 [2024-12-05 19:40:34.184105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:40.799 pt1
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.799 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.059 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.059 "name": "raid_bdev1",
00:20:41.059 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2",
00:20:41.059 "strip_size_kb": 64,
00:20:41.059 "state": "configuring",
00:20:41.059 "raid_level": "raid5f",
00:20:41.059 "superblock": true,
00:20:41.059 "num_base_bdevs": 4,
00:20:41.059 "num_base_bdevs_discovered": 2,
00:20:41.059 "num_base_bdevs_operational": 3,
00:20:41.059 "base_bdevs_list": [
00:20:41.059 {
00:20:41.059 "name": null,
00:20:41.059 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:41.059 "is_configured": false,
00:20:41.059 "data_offset": 2048,
00:20:41.059 "data_size": 63488
00:20:41.059 },
00:20:41.059 {
00:20:41.059 "name": "pt2",
00:20:41.059 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:41.059 "is_configured": true,
00:20:41.059 "data_offset": 2048,
00:20:41.059 "data_size": 63488
00:20:41.059 },
00:20:41.059 {
00:20:41.059 "name": "pt3",
00:20:41.059 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:41.059 "is_configured": true,
00:20:41.059 "data_offset": 2048,
00:20:41.059 "data_size": 63488
00:20:41.059 },
00:20:41.059 {
00:20:41.059 "name": null,
00:20:41.059 "uuid": "00000000-0000-0000-0000-000000000004",
00:20:41.059 "is_configured": false,
00:20:41.059 "data_offset": 2048,
00:20:41.059 "data_size": 63488
00:20:41.059 }
00:20:41.059 ]
00:20:41.059 }'
00:20:41.059 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.059 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:41.321 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:20:41.321 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:20:41.321 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.321 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:41.321 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:41.610 [2024-12-05 19:40:34.764410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:20:41.610 [2024-12-05 19:40:34.764489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:41.610 [2024-12-05 19:40:34.764524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:20:41.610 [2024-12-05 19:40:34.764540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:41.610 [2024-12-05 19:40:34.765152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:41.610 [2024-12-05 19:40:34.765202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:20:41.610 [2024-12-05 19:40:34.765311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:20:41.610 [2024-12-05 19:40:34.765345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:20:41.610 [2024-12-05 19:40:34.765515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:20:41.610 [2024-12-05 19:40:34.765547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:20:41.610 [2024-12-05 19:40:34.765877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:20:41.610 [2024-12-05 19:40:34.772578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:20:41.610 [2024-12-05 19:40:34.772623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:20:41.610 [2024-12-05 19:40:34.773023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:41.610 pt4
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.610 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.610 "name": "raid_bdev1",
00:20:41.610 "uuid": "e2a971a4-5244-4d7a-9e76-0564d3f904f2",
00:20:41.610 "strip_size_kb": 64,
00:20:41.610 "state": "online",
00:20:41.610 "raid_level": "raid5f",
00:20:41.610 "superblock": true,
00:20:41.610 "num_base_bdevs": 4,
00:20:41.610 "num_base_bdevs_discovered": 3,
00:20:41.610 "num_base_bdevs_operational": 3,
00:20:41.610 "base_bdevs_list": [
00:20:41.610 {
00:20:41.610 "name": null,
00:20:41.610 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:41.610 "is_configured": false,
00:20:41.610 "data_offset": 2048,
00:20:41.610 "data_size": 63488
00:20:41.610 },
00:20:41.610 {
00:20:41.610 "name": "pt2",
00:20:41.610 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:41.610 "is_configured": true,
00:20:41.610 "data_offset": 2048,
00:20:41.610 "data_size": 63488
00:20:41.610 },
00:20:41.610 {
00:20:41.610 "name": "pt3",
00:20:41.610 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:41.610 "is_configured": true,
00:20:41.610 "data_offset": 2048,
00:20:41.610 "data_size": 63488
00:20:41.610 },
00:20:41.610 {
00:20:41.610 "name": "pt4",
00:20:41.610 "uuid": "00000000-0000-0000-0000-000000000004",
00:20:41.610 "is_configured": true,
00:20:41.610 "data_offset": 2048,
00:20:41.610 "data_size": 63488
00:20:41.610 }
00:20:41.610 ]
00:20:41.610 }'
00:20:41.611 19:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.611 19:40:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:42.192 [2024-12-05 19:40:35.420946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e2a971a4-5244-4d7a-9e76-0564d3f904f2 '!=' e2a971a4-5244-4d7a-9e76-0564d3f904f2 ']'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84540
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84540 ']'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84540
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84540
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:42.192 killing process with pid 84540
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84540'
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84540
00:20:42.192 [2024-12-05 19:40:35.515724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:42.192 19:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84540
00:20:42.192 [2024-12-05 19:40:35.515870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:42.192 [2024-12-05 19:40:35.515978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:42.192 [2024-12-05 19:40:35.516009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:20:42.470 [2024-12-05 19:40:35.878318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:43.861 19:40:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:20:43.861 real 0m9.702s 00:20:43.861 user 0m15.895s 00:20:43.861 sys 0m1.451s 00:20:43.861 19:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.861 19:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 ************************************ 00:20:43.861 END TEST raid5f_superblock_test 00:20:43.861 ************************************ 00:20:43.861 19:40:37 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:43.861 19:40:37 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:43.861 19:40:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:43.861 19:40:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.861 19:40:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 ************************************ 00:20:43.861 START TEST raid5f_rebuild_test 00:20:43.861 ************************************ 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:43.861 19:40:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85037 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85037 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85037 ']' 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.861 19:40:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 [2024-12-05 19:40:37.150558] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:20:43.861 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:43.861 Zero copy mechanism will not be used. 
00:20:43.861 [2024-12-05 19:40:37.150787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85037 ] 00:20:44.120 [2024-12-05 19:40:37.339138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.120 [2024-12-05 19:40:37.475582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.379 [2024-12-05 19:40:37.690008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.379 [2024-12-05 19:40:37.690063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 BaseBdev1_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 [2024-12-05 19:40:38.252902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:44.947 [2024-12-05 19:40:38.252978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.947 [2024-12-05 19:40:38.253010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:44.947 [2024-12-05 19:40:38.253029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.947 [2024-12-05 19:40:38.256031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.947 BaseBdev1 00:20:44.947 [2024-12-05 19:40:38.256214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 BaseBdev2_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 [2024-12-05 19:40:38.303852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:44.947 [2024-12-05 19:40:38.304073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.947 [2024-12-05 19:40:38.304148] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:44.947 [2024-12-05 19:40:38.304277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.947 [2024-12-05 19:40:38.307200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.947 BaseBdev2 00:20:44.947 [2024-12-05 19:40:38.307404] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 BaseBdev3_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.947 [2024-12-05 19:40:38.371384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:44.947 [2024-12-05 19:40:38.371492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.947 [2024-12-05 19:40:38.371522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:44.947 [2024-12-05 19:40:38.371540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.947 
[2024-12-05 19:40:38.374360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.947 [2024-12-05 19:40:38.374424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:44.947 BaseBdev3 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.947 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.206 BaseBdev4_malloc 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 [2024-12-05 19:40:38.422358] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:45.207 [2024-12-05 19:40:38.422484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.207 [2024-12-05 19:40:38.422600] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:45.207 [2024-12-05 19:40:38.422734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.207 [2024-12-05 19:40:38.425616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.207 BaseBdev4 00:20:45.207 [2024-12-05 19:40:38.425806] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 spare_malloc 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 spare_delay 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 [2024-12-05 19:40:38.485738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.207 [2024-12-05 19:40:38.485981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.207 [2024-12-05 19:40:38.486051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:45.207 [2024-12-05 19:40:38.486272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.207 [2024-12-05 19:40:38.489169] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.207 [2024-12-05 19:40:38.489232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.207 spare 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 [2024-12-05 19:40:38.493885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.207 [2024-12-05 19:40:38.496593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.207 [2024-12-05 19:40:38.496819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.207 [2024-12-05 19:40:38.496952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:45.207 [2024-12-05 19:40:38.497129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:45.207 [2024-12-05 19:40:38.497186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:45.207 [2024-12-05 19:40:38.497632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:45.207 [2024-12-05 19:40:38.504795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:45.207 [2024-12-05 19:40:38.504822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:45.207 [2024-12-05 19:40:38.505065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.207 "name": "raid_bdev1", 00:20:45.207 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:45.207 "strip_size_kb": 64, 00:20:45.207 "state": 
"online", 00:20:45.207 "raid_level": "raid5f", 00:20:45.207 "superblock": false, 00:20:45.207 "num_base_bdevs": 4, 00:20:45.207 "num_base_bdevs_discovered": 4, 00:20:45.207 "num_base_bdevs_operational": 4, 00:20:45.207 "base_bdevs_list": [ 00:20:45.207 { 00:20:45.207 "name": "BaseBdev1", 00:20:45.207 "uuid": "111e937b-6dc9-582c-b1e2-441f37085716", 00:20:45.207 "is_configured": true, 00:20:45.207 "data_offset": 0, 00:20:45.207 "data_size": 65536 00:20:45.207 }, 00:20:45.207 { 00:20:45.207 "name": "BaseBdev2", 00:20:45.207 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:45.207 "is_configured": true, 00:20:45.207 "data_offset": 0, 00:20:45.207 "data_size": 65536 00:20:45.207 }, 00:20:45.207 { 00:20:45.207 "name": "BaseBdev3", 00:20:45.207 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:45.207 "is_configured": true, 00:20:45.207 "data_offset": 0, 00:20:45.207 "data_size": 65536 00:20:45.207 }, 00:20:45.207 { 00:20:45.207 "name": "BaseBdev4", 00:20:45.207 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:45.207 "is_configured": true, 00:20:45.207 "data_offset": 0, 00:20:45.207 "data_size": 65536 00:20:45.207 } 00:20:45.207 ] 00:20:45.207 }' 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.207 19:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 [2024-12-05 19:40:39.017052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:20:45.776 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:46.036 [2024-12-05 19:40:39.412978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:46.036 /dev/nbd0 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.036 1+0 records in 00:20:46.036 1+0 records out 00:20:46.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033593 s, 12.2 MB/s 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:46.036 19:40:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:46.296 19:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:20:46.894 512+0 records in 00:20:46.894 512+0 records out 00:20:46.894 100663296 bytes (101 MB, 96 MiB) copied, 0.646932 s, 156 MB/s 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.894 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:47.152 [2024-12-05 19:40:40.423606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:47.152 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.153 [2024-12-05 19:40:40.463173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.153 "name": "raid_bdev1", 00:20:47.153 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:47.153 "strip_size_kb": 64, 00:20:47.153 "state": "online", 00:20:47.153 "raid_level": "raid5f", 00:20:47.153 "superblock": false, 00:20:47.153 "num_base_bdevs": 4, 00:20:47.153 "num_base_bdevs_discovered": 3, 00:20:47.153 "num_base_bdevs_operational": 3, 00:20:47.153 "base_bdevs_list": [ 00:20:47.153 { 00:20:47.153 "name": null, 00:20:47.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.153 "is_configured": false, 00:20:47.153 "data_offset": 0, 00:20:47.153 "data_size": 65536 00:20:47.153 }, 00:20:47.153 { 00:20:47.153 "name": "BaseBdev2", 00:20:47.153 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:47.153 "is_configured": true, 00:20:47.153 "data_offset": 0, 00:20:47.153 "data_size": 65536 00:20:47.153 }, 00:20:47.153 { 00:20:47.153 "name": "BaseBdev3", 00:20:47.153 "uuid": 
"b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:47.153 "is_configured": true, 00:20:47.153 "data_offset": 0, 00:20:47.153 "data_size": 65536 00:20:47.153 }, 00:20:47.153 { 00:20:47.153 "name": "BaseBdev4", 00:20:47.153 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:47.153 "is_configured": true, 00:20:47.153 "data_offset": 0, 00:20:47.153 "data_size": 65536 00:20:47.153 } 00:20:47.153 ] 00:20:47.153 }' 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.153 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.721 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:47.722 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.722 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.722 [2024-12-05 19:40:40.983373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.722 [2024-12-05 19:40:40.997875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:47.722 19:40:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.722 19:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:47.722 [2024-12-05 19:40:41.007238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:48.659 19:40:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.659 "name": "raid_bdev1", 00:20:48.659 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:48.659 "strip_size_kb": 64, 00:20:48.659 "state": "online", 00:20:48.659 "raid_level": "raid5f", 00:20:48.659 "superblock": false, 00:20:48.659 "num_base_bdevs": 4, 00:20:48.659 "num_base_bdevs_discovered": 4, 00:20:48.659 "num_base_bdevs_operational": 4, 00:20:48.659 "process": { 00:20:48.659 "type": "rebuild", 00:20:48.659 "target": "spare", 00:20:48.659 "progress": { 00:20:48.659 "blocks": 17280, 00:20:48.659 "percent": 8 00:20:48.659 } 00:20:48.659 }, 00:20:48.659 "base_bdevs_list": [ 00:20:48.659 { 00:20:48.659 "name": "spare", 00:20:48.659 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:48.659 "is_configured": true, 00:20:48.659 "data_offset": 0, 00:20:48.659 "data_size": 65536 00:20:48.659 }, 00:20:48.659 { 00:20:48.659 "name": "BaseBdev2", 00:20:48.659 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:48.659 "is_configured": true, 00:20:48.659 "data_offset": 0, 00:20:48.659 "data_size": 65536 00:20:48.659 }, 00:20:48.659 { 00:20:48.659 "name": "BaseBdev3", 00:20:48.659 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:48.659 "is_configured": true, 00:20:48.659 "data_offset": 0, 00:20:48.659 "data_size": 65536 00:20:48.659 }, 
00:20:48.659 { 00:20:48.659 "name": "BaseBdev4", 00:20:48.659 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:48.659 "is_configured": true, 00:20:48.659 "data_offset": 0, 00:20:48.659 "data_size": 65536 00:20:48.659 } 00:20:48.659 ] 00:20:48.659 }' 00:20:48.659 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.919 [2024-12-05 19:40:42.168377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.919 [2024-12-05 19:40:42.220624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:48.919 [2024-12-05 19:40:42.220983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.919 [2024-12-05 19:40:42.221017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.919 [2024-12-05 19:40:42.221034] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.919 "name": "raid_bdev1", 00:20:48.919 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:48.919 "strip_size_kb": 64, 00:20:48.919 "state": "online", 00:20:48.919 "raid_level": "raid5f", 00:20:48.919 "superblock": false, 00:20:48.919 "num_base_bdevs": 4, 00:20:48.919 "num_base_bdevs_discovered": 3, 00:20:48.919 "num_base_bdevs_operational": 3, 00:20:48.919 "base_bdevs_list": [ 00:20:48.919 { 00:20:48.919 "name": null, 00:20:48.919 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:48.919 "is_configured": false, 00:20:48.919 "data_offset": 0, 00:20:48.919 "data_size": 65536 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "BaseBdev2", 00:20:48.919 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:48.919 "is_configured": true, 00:20:48.919 "data_offset": 0, 00:20:48.919 "data_size": 65536 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "BaseBdev3", 00:20:48.919 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:48.919 "is_configured": true, 00:20:48.919 "data_offset": 0, 00:20:48.919 "data_size": 65536 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "BaseBdev4", 00:20:48.919 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:48.919 "is_configured": true, 00:20:48.919 "data_offset": 0, 00:20:48.919 "data_size": 65536 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 }' 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.919 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.487 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.488 19:40:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.488 "name": "raid_bdev1", 00:20:49.488 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:49.488 "strip_size_kb": 64, 00:20:49.488 "state": "online", 00:20:49.488 "raid_level": "raid5f", 00:20:49.488 "superblock": false, 00:20:49.488 "num_base_bdevs": 4, 00:20:49.488 "num_base_bdevs_discovered": 3, 00:20:49.488 "num_base_bdevs_operational": 3, 00:20:49.488 "base_bdevs_list": [ 00:20:49.488 { 00:20:49.488 "name": null, 00:20:49.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.488 "is_configured": false, 00:20:49.488 "data_offset": 0, 00:20:49.488 "data_size": 65536 00:20:49.488 }, 00:20:49.488 { 00:20:49.488 "name": "BaseBdev2", 00:20:49.488 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:49.488 "is_configured": true, 00:20:49.488 "data_offset": 0, 00:20:49.488 "data_size": 65536 00:20:49.488 }, 00:20:49.488 { 00:20:49.488 "name": "BaseBdev3", 00:20:49.488 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:49.488 "is_configured": true, 00:20:49.488 "data_offset": 0, 00:20:49.488 "data_size": 65536 00:20:49.488 }, 00:20:49.488 { 00:20:49.488 "name": "BaseBdev4", 00:20:49.488 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:49.488 "is_configured": true, 00:20:49.488 "data_offset": 0, 00:20:49.488 "data_size": 65536 00:20:49.488 } 00:20:49.488 ] 00:20:49.488 }' 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:49.488 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.747 19:40:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:49.747 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.747 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.747 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.747 [2024-12-05 19:40:42.965234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.747 [2024-12-05 19:40:42.979123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:49.747 19:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.747 19:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:49.747 [2024-12-05 19:40:42.988098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.685 19:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.685 19:40:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.685 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.685 "name": "raid_bdev1", 00:20:50.685 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:50.685 "strip_size_kb": 64, 00:20:50.685 "state": "online", 00:20:50.685 "raid_level": "raid5f", 00:20:50.685 "superblock": false, 00:20:50.685 "num_base_bdevs": 4, 00:20:50.685 "num_base_bdevs_discovered": 4, 00:20:50.685 "num_base_bdevs_operational": 4, 00:20:50.685 "process": { 00:20:50.685 "type": "rebuild", 00:20:50.685 "target": "spare", 00:20:50.685 "progress": { 00:20:50.685 "blocks": 17280, 00:20:50.685 "percent": 8 00:20:50.685 } 00:20:50.685 }, 00:20:50.685 "base_bdevs_list": [ 00:20:50.685 { 00:20:50.685 "name": "spare", 00:20:50.685 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:50.685 "is_configured": true, 00:20:50.685 "data_offset": 0, 00:20:50.685 "data_size": 65536 00:20:50.685 }, 00:20:50.685 { 00:20:50.685 "name": "BaseBdev2", 00:20:50.685 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:50.685 "is_configured": true, 00:20:50.685 "data_offset": 0, 00:20:50.685 "data_size": 65536 00:20:50.685 }, 00:20:50.685 { 00:20:50.685 "name": "BaseBdev3", 00:20:50.685 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:50.685 "is_configured": true, 00:20:50.685 "data_offset": 0, 00:20:50.685 "data_size": 65536 00:20:50.685 }, 00:20:50.685 { 00:20:50.685 "name": "BaseBdev4", 00:20:50.685 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:50.685 "is_configured": true, 00:20:50.685 "data_offset": 0, 00:20:50.685 "data_size": 65536 00:20:50.685 } 00:20:50.685 ] 00:20:50.686 }' 00:20:50.686 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.686 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.686 19:40:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=678 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.945 "name": "raid_bdev1", 00:20:50.945 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 
00:20:50.945 "strip_size_kb": 64, 00:20:50.945 "state": "online", 00:20:50.945 "raid_level": "raid5f", 00:20:50.945 "superblock": false, 00:20:50.945 "num_base_bdevs": 4, 00:20:50.945 "num_base_bdevs_discovered": 4, 00:20:50.945 "num_base_bdevs_operational": 4, 00:20:50.945 "process": { 00:20:50.945 "type": "rebuild", 00:20:50.945 "target": "spare", 00:20:50.945 "progress": { 00:20:50.945 "blocks": 21120, 00:20:50.945 "percent": 10 00:20:50.945 } 00:20:50.945 }, 00:20:50.945 "base_bdevs_list": [ 00:20:50.945 { 00:20:50.945 "name": "spare", 00:20:50.945 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:50.945 "is_configured": true, 00:20:50.945 "data_offset": 0, 00:20:50.945 "data_size": 65536 00:20:50.945 }, 00:20:50.945 { 00:20:50.945 "name": "BaseBdev2", 00:20:50.945 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:50.945 "is_configured": true, 00:20:50.945 "data_offset": 0, 00:20:50.945 "data_size": 65536 00:20:50.945 }, 00:20:50.945 { 00:20:50.945 "name": "BaseBdev3", 00:20:50.945 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:50.945 "is_configured": true, 00:20:50.945 "data_offset": 0, 00:20:50.945 "data_size": 65536 00:20:50.945 }, 00:20:50.945 { 00:20:50.945 "name": "BaseBdev4", 00:20:50.945 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:50.945 "is_configured": true, 00:20:50.945 "data_offset": 0, 00:20:50.945 "data_size": 65536 00:20:50.945 } 00:20:50.945 ] 00:20:50.945 }' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.945 19:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:51.900 19:40:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.900 19:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.158 19:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.158 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.158 "name": "raid_bdev1", 00:20:52.158 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:52.158 "strip_size_kb": 64, 00:20:52.158 "state": "online", 00:20:52.158 "raid_level": "raid5f", 00:20:52.158 "superblock": false, 00:20:52.159 "num_base_bdevs": 4, 00:20:52.159 "num_base_bdevs_discovered": 4, 00:20:52.159 "num_base_bdevs_operational": 4, 00:20:52.159 "process": { 00:20:52.159 "type": "rebuild", 00:20:52.159 "target": "spare", 00:20:52.159 "progress": { 00:20:52.159 "blocks": 44160, 00:20:52.159 "percent": 22 00:20:52.159 } 00:20:52.159 }, 00:20:52.159 "base_bdevs_list": [ 00:20:52.159 { 00:20:52.159 "name": "spare", 00:20:52.159 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 
00:20:52.159 "is_configured": true, 00:20:52.159 "data_offset": 0, 00:20:52.159 "data_size": 65536 00:20:52.159 }, 00:20:52.159 { 00:20:52.159 "name": "BaseBdev2", 00:20:52.159 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:52.159 "is_configured": true, 00:20:52.159 "data_offset": 0, 00:20:52.159 "data_size": 65536 00:20:52.159 }, 00:20:52.159 { 00:20:52.159 "name": "BaseBdev3", 00:20:52.159 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:52.159 "is_configured": true, 00:20:52.159 "data_offset": 0, 00:20:52.159 "data_size": 65536 00:20:52.159 }, 00:20:52.159 { 00:20:52.159 "name": "BaseBdev4", 00:20:52.159 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:52.159 "is_configured": true, 00:20:52.159 "data_offset": 0, 00:20:52.159 "data_size": 65536 00:20:52.159 } 00:20:52.159 ] 00:20:52.159 }' 00:20:52.159 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.159 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.159 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.159 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.159 19:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.094 19:40:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.353 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.353 "name": "raid_bdev1", 00:20:53.353 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:53.353 "strip_size_kb": 64, 00:20:53.353 "state": "online", 00:20:53.353 "raid_level": "raid5f", 00:20:53.353 "superblock": false, 00:20:53.353 "num_base_bdevs": 4, 00:20:53.353 "num_base_bdevs_discovered": 4, 00:20:53.353 "num_base_bdevs_operational": 4, 00:20:53.353 "process": { 00:20:53.353 "type": "rebuild", 00:20:53.353 "target": "spare", 00:20:53.353 "progress": { 00:20:53.353 "blocks": 65280, 00:20:53.353 "percent": 33 00:20:53.353 } 00:20:53.353 }, 00:20:53.353 "base_bdevs_list": [ 00:20:53.353 { 00:20:53.353 "name": "spare", 00:20:53.353 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:53.353 "is_configured": true, 00:20:53.353 "data_offset": 0, 00:20:53.353 "data_size": 65536 00:20:53.353 }, 00:20:53.353 { 00:20:53.353 "name": "BaseBdev2", 00:20:53.353 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:53.353 "is_configured": true, 00:20:53.353 "data_offset": 0, 00:20:53.353 "data_size": 65536 00:20:53.353 }, 00:20:53.353 { 00:20:53.353 "name": "BaseBdev3", 00:20:53.353 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:53.353 "is_configured": true, 00:20:53.353 "data_offset": 0, 00:20:53.353 "data_size": 65536 00:20:53.353 }, 00:20:53.353 { 00:20:53.353 "name": 
"BaseBdev4", 00:20:53.353 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:53.353 "is_configured": true, 00:20:53.353 "data_offset": 0, 00:20:53.353 "data_size": 65536 00:20:53.353 } 00:20:53.353 ] 00:20:53.353 }' 00:20:53.353 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.353 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.353 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.353 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.353 19:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.290 19:40:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.290 19:40:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.290 "name": "raid_bdev1", 00:20:54.290 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:54.290 "strip_size_kb": 64, 00:20:54.290 "state": "online", 00:20:54.290 "raid_level": "raid5f", 00:20:54.290 "superblock": false, 00:20:54.290 "num_base_bdevs": 4, 00:20:54.290 "num_base_bdevs_discovered": 4, 00:20:54.290 "num_base_bdevs_operational": 4, 00:20:54.291 "process": { 00:20:54.291 "type": "rebuild", 00:20:54.291 "target": "spare", 00:20:54.291 "progress": { 00:20:54.291 "blocks": 88320, 00:20:54.291 "percent": 44 00:20:54.291 } 00:20:54.291 }, 00:20:54.291 "base_bdevs_list": [ 00:20:54.291 { 00:20:54.291 "name": "spare", 00:20:54.291 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:54.291 "is_configured": true, 00:20:54.291 "data_offset": 0, 00:20:54.291 "data_size": 65536 00:20:54.291 }, 00:20:54.291 { 00:20:54.291 "name": "BaseBdev2", 00:20:54.291 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:54.291 "is_configured": true, 00:20:54.291 "data_offset": 0, 00:20:54.291 "data_size": 65536 00:20:54.291 }, 00:20:54.291 { 00:20:54.291 "name": "BaseBdev3", 00:20:54.291 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:54.291 "is_configured": true, 00:20:54.291 "data_offset": 0, 00:20:54.291 "data_size": 65536 00:20:54.291 }, 00:20:54.291 { 00:20:54.291 "name": "BaseBdev4", 00:20:54.291 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:54.291 "is_configured": true, 00:20:54.291 "data_offset": 0, 00:20:54.291 "data_size": 65536 00:20:54.291 } 00:20:54.291 ] 00:20:54.291 }' 00:20:54.291 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.550 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.550 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.550 19:40:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.550 19:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.487 "name": "raid_bdev1", 00:20:55.487 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:55.487 "strip_size_kb": 64, 00:20:55.487 "state": "online", 00:20:55.487 "raid_level": "raid5f", 00:20:55.487 "superblock": false, 00:20:55.487 "num_base_bdevs": 4, 00:20:55.487 "num_base_bdevs_discovered": 4, 00:20:55.487 "num_base_bdevs_operational": 4, 00:20:55.487 "process": { 00:20:55.487 "type": "rebuild", 00:20:55.487 "target": "spare", 00:20:55.487 "progress": { 00:20:55.487 "blocks": 109440, 00:20:55.487 "percent": 55 00:20:55.487 } 
00:20:55.487 }, 00:20:55.487 "base_bdevs_list": [ 00:20:55.487 { 00:20:55.487 "name": "spare", 00:20:55.487 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:55.487 "is_configured": true, 00:20:55.487 "data_offset": 0, 00:20:55.487 "data_size": 65536 00:20:55.487 }, 00:20:55.487 { 00:20:55.487 "name": "BaseBdev2", 00:20:55.487 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:55.487 "is_configured": true, 00:20:55.487 "data_offset": 0, 00:20:55.487 "data_size": 65536 00:20:55.487 }, 00:20:55.487 { 00:20:55.487 "name": "BaseBdev3", 00:20:55.487 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:55.487 "is_configured": true, 00:20:55.487 "data_offset": 0, 00:20:55.487 "data_size": 65536 00:20:55.487 }, 00:20:55.487 { 00:20:55.487 "name": "BaseBdev4", 00:20:55.487 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:55.487 "is_configured": true, 00:20:55.487 "data_offset": 0, 00:20:55.487 "data_size": 65536 00:20:55.487 } 00:20:55.487 ] 00:20:55.487 }' 00:20:55.487 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.746 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.746 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.746 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.746 19:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.682 
19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.682 19:40:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.682 19:40:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.682 19:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.682 "name": "raid_bdev1", 00:20:56.682 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:56.682 "strip_size_kb": 64, 00:20:56.682 "state": "online", 00:20:56.682 "raid_level": "raid5f", 00:20:56.682 "superblock": false, 00:20:56.682 "num_base_bdevs": 4, 00:20:56.682 "num_base_bdevs_discovered": 4, 00:20:56.682 "num_base_bdevs_operational": 4, 00:20:56.682 "process": { 00:20:56.682 "type": "rebuild", 00:20:56.682 "target": "spare", 00:20:56.682 "progress": { 00:20:56.682 "blocks": 132480, 00:20:56.682 "percent": 67 00:20:56.682 } 00:20:56.682 }, 00:20:56.682 "base_bdevs_list": [ 00:20:56.682 { 00:20:56.682 "name": "spare", 00:20:56.682 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:56.682 "is_configured": true, 00:20:56.682 "data_offset": 0, 00:20:56.682 "data_size": 65536 00:20:56.682 }, 00:20:56.682 { 00:20:56.682 "name": "BaseBdev2", 00:20:56.682 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:56.682 "is_configured": true, 00:20:56.682 "data_offset": 0, 00:20:56.682 "data_size": 65536 00:20:56.682 }, 00:20:56.682 { 00:20:56.682 "name": "BaseBdev3", 00:20:56.682 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 
00:20:56.682 "is_configured": true, 00:20:56.682 "data_offset": 0, 00:20:56.682 "data_size": 65536 00:20:56.682 }, 00:20:56.682 { 00:20:56.682 "name": "BaseBdev4", 00:20:56.682 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:56.682 "is_configured": true, 00:20:56.682 "data_offset": 0, 00:20:56.682 "data_size": 65536 00:20:56.682 } 00:20:56.682 ] 00:20:56.682 }' 00:20:56.682 19:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.682 19:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.682 19:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.941 19:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.941 19:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.877 "name": "raid_bdev1", 00:20:57.877 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:57.877 "strip_size_kb": 64, 00:20:57.877 "state": "online", 00:20:57.877 "raid_level": "raid5f", 00:20:57.877 "superblock": false, 00:20:57.877 "num_base_bdevs": 4, 00:20:57.877 "num_base_bdevs_discovered": 4, 00:20:57.877 "num_base_bdevs_operational": 4, 00:20:57.877 "process": { 00:20:57.877 "type": "rebuild", 00:20:57.877 "target": "spare", 00:20:57.877 "progress": { 00:20:57.877 "blocks": 153600, 00:20:57.877 "percent": 78 00:20:57.877 } 00:20:57.877 }, 00:20:57.877 "base_bdevs_list": [ 00:20:57.877 { 00:20:57.877 "name": "spare", 00:20:57.877 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:57.877 "is_configured": true, 00:20:57.877 "data_offset": 0, 00:20:57.877 "data_size": 65536 00:20:57.877 }, 00:20:57.877 { 00:20:57.877 "name": "BaseBdev2", 00:20:57.877 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:57.877 "is_configured": true, 00:20:57.877 "data_offset": 0, 00:20:57.877 "data_size": 65536 00:20:57.877 }, 00:20:57.877 { 00:20:57.877 "name": "BaseBdev3", 00:20:57.877 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:57.877 "is_configured": true, 00:20:57.877 "data_offset": 0, 00:20:57.877 "data_size": 65536 00:20:57.877 }, 00:20:57.877 { 00:20:57.877 "name": "BaseBdev4", 00:20:57.877 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:57.877 "is_configured": true, 00:20:57.877 "data_offset": 0, 00:20:57.877 "data_size": 65536 00:20:57.877 } 00:20:57.877 ] 00:20:57.877 }' 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:57.877 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.136 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.136 19:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.069 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.069 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.069 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.069 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.069 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.069 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.070 "name": "raid_bdev1", 00:20:59.070 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:20:59.070 "strip_size_kb": 64, 00:20:59.070 "state": "online", 00:20:59.070 "raid_level": "raid5f", 00:20:59.070 "superblock": false, 00:20:59.070 "num_base_bdevs": 4, 00:20:59.070 "num_base_bdevs_discovered": 4, 00:20:59.070 "num_base_bdevs_operational": 4, 00:20:59.070 
"process": { 00:20:59.070 "type": "rebuild", 00:20:59.070 "target": "spare", 00:20:59.070 "progress": { 00:20:59.070 "blocks": 176640, 00:20:59.070 "percent": 89 00:20:59.070 } 00:20:59.070 }, 00:20:59.070 "base_bdevs_list": [ 00:20:59.070 { 00:20:59.070 "name": "spare", 00:20:59.070 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:20:59.070 "is_configured": true, 00:20:59.070 "data_offset": 0, 00:20:59.070 "data_size": 65536 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "name": "BaseBdev2", 00:20:59.070 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:20:59.070 "is_configured": true, 00:20:59.070 "data_offset": 0, 00:20:59.070 "data_size": 65536 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "name": "BaseBdev3", 00:20:59.070 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:20:59.070 "is_configured": true, 00:20:59.070 "data_offset": 0, 00:20:59.070 "data_size": 65536 00:20:59.070 }, 00:20:59.070 { 00:20:59.070 "name": "BaseBdev4", 00:20:59.070 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:20:59.070 "is_configured": true, 00:20:59.070 "data_offset": 0, 00:20:59.070 "data_size": 65536 00:20:59.070 } 00:20:59.070 ] 00:20:59.070 }' 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.070 19:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.006 [2024-12-05 19:40:53.394507] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:00.006 [2024-12-05 19:40:53.394583] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:00.006 [2024-12-05 
19:40:53.394656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.265 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.265 "name": "raid_bdev1", 00:21:00.265 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:21:00.266 "strip_size_kb": 64, 00:21:00.266 "state": "online", 00:21:00.266 "raid_level": "raid5f", 00:21:00.266 "superblock": false, 00:21:00.266 "num_base_bdevs": 4, 00:21:00.266 "num_base_bdevs_discovered": 4, 00:21:00.266 "num_base_bdevs_operational": 4, 00:21:00.266 "base_bdevs_list": [ 00:21:00.266 { 00:21:00.266 "name": "spare", 00:21:00.266 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 
00:21:00.266 }, 00:21:00.266 { 00:21:00.266 "name": "BaseBdev2", 00:21:00.266 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 }, 00:21:00.266 { 00:21:00.266 "name": "BaseBdev3", 00:21:00.266 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 }, 00:21:00.266 { 00:21:00.266 "name": "BaseBdev4", 00:21:00.266 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 } 00:21:00.266 ] 00:21:00.266 }' 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.266 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.266 "name": "raid_bdev1", 00:21:00.266 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:21:00.266 "strip_size_kb": 64, 00:21:00.266 "state": "online", 00:21:00.266 "raid_level": "raid5f", 00:21:00.266 "superblock": false, 00:21:00.266 "num_base_bdevs": 4, 00:21:00.266 "num_base_bdevs_discovered": 4, 00:21:00.266 "num_base_bdevs_operational": 4, 00:21:00.266 "base_bdevs_list": [ 00:21:00.266 { 00:21:00.266 "name": "spare", 00:21:00.266 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 }, 00:21:00.266 { 00:21:00.266 "name": "BaseBdev2", 00:21:00.266 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 }, 00:21:00.266 { 00:21:00.266 "name": "BaseBdev3", 00:21:00.266 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 }, 00:21:00.266 { 00:21:00.266 "name": "BaseBdev4", 00:21:00.266 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:21:00.266 "is_configured": true, 00:21:00.266 "data_offset": 0, 00:21:00.266 "data_size": 65536 00:21:00.266 } 00:21:00.266 ] 00:21:00.266 }' 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.525 "name": "raid_bdev1", 
00:21:00.525 "uuid": "b84feb25-5a48-45d8-9993-48a0ba1ba679", 00:21:00.525 "strip_size_kb": 64, 00:21:00.525 "state": "online", 00:21:00.525 "raid_level": "raid5f", 00:21:00.525 "superblock": false, 00:21:00.525 "num_base_bdevs": 4, 00:21:00.525 "num_base_bdevs_discovered": 4, 00:21:00.525 "num_base_bdevs_operational": 4, 00:21:00.525 "base_bdevs_list": [ 00:21:00.525 { 00:21:00.525 "name": "spare", 00:21:00.525 "uuid": "153b492f-87d8-5836-8666-fbd296c82306", 00:21:00.525 "is_configured": true, 00:21:00.525 "data_offset": 0, 00:21:00.525 "data_size": 65536 00:21:00.525 }, 00:21:00.525 { 00:21:00.525 "name": "BaseBdev2", 00:21:00.525 "uuid": "2d4de49a-fc91-5d8e-b81b-0ad0df3eee81", 00:21:00.525 "is_configured": true, 00:21:00.525 "data_offset": 0, 00:21:00.525 "data_size": 65536 00:21:00.525 }, 00:21:00.525 { 00:21:00.525 "name": "BaseBdev3", 00:21:00.525 "uuid": "b19c3c4b-eba1-59d3-9a60-2804d6d99986", 00:21:00.525 "is_configured": true, 00:21:00.525 "data_offset": 0, 00:21:00.525 "data_size": 65536 00:21:00.525 }, 00:21:00.525 { 00:21:00.525 "name": "BaseBdev4", 00:21:00.525 "uuid": "36d18e00-53f4-56a4-8953-021cc91cb8d6", 00:21:00.525 "is_configured": true, 00:21:00.525 "data_offset": 0, 00:21:00.525 "data_size": 65536 00:21:00.525 } 00:21:00.525 ] 00:21:00.525 }' 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.525 19:40:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.095 [2024-12-05 19:40:54.363178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.095 [2024-12-05 19:40:54.363362] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:21:01.095 [2024-12-05 19:40:54.363494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.095 [2024-12-05 19:40:54.363630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.095 [2024-12-05 19:40:54.363662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.095 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:01.355 /dev/nbd0 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.355 1+0 records in 00:21:01.355 1+0 records out 00:21:01.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346336 s, 11.8 MB/s 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.355 19:40:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:01.614 /dev/nbd1 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.614 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.614 19:40:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.873 1+0 records in 00:21:01.873 1+0 records out 00:21:01.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354471 s, 11.6 MB/s 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.873 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85037 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85037 ']' 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85037 00:21:02.440 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85037 00:21:02.700 killing process with pid 85037 00:21:02.700 Received shutdown signal, test time was about 60.000000 seconds 00:21:02.700 00:21:02.700 Latency(us) 00:21:02.700 [2024-12-05T19:40:56.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.700 [2024-12-05T19:40:56.141Z] =================================================================================================================== 00:21:02.700 [2024-12-05T19:40:56.141Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85037' 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85037 00:21:02.700 19:40:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85037 00:21:02.700 [2024-12-05 19:40:55.904793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:02.960 [2024-12-05 19:40:56.347644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:21:04.336 ************************************ 00:21:04.336 END TEST raid5f_rebuild_test 00:21:04.336 ************************************ 00:21:04.336 00:21:04.336 real 0m20.385s 00:21:04.336 user 0m25.455s 00:21:04.336 sys 0m2.316s 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.336 19:40:57 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:21:04.336 19:40:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:04.336 19:40:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.336 19:40:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.336 ************************************ 00:21:04.336 START TEST raid5f_rebuild_test_sb 00:21:04.336 ************************************ 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85547 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85547 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85547 ']' 00:21:04.336 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.337 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.337 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.337 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.337 19:40:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:04.337 Zero copy mechanism will not be used. 
00:21:04.337 [2024-12-05 19:40:57.588590] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:04.337 [2024-12-05 19:40:57.588810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85547 ] 00:21:04.337 [2024-12-05 19:40:57.776024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.594 [2024-12-05 19:40:57.911207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.852 [2024-12-05 19:40:58.119661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.852 [2024-12-05 19:40:58.119707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.111 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.111 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:05.111 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.111 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:05.111 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.111 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.369 BaseBdev1_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.369 [2024-12-05 19:40:58.589843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:05.369 [2024-12-05 19:40:58.589918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.369 [2024-12-05 19:40:58.589996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:05.369 [2024-12-05 19:40:58.590015] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.369 [2024-12-05 19:40:58.592849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.369 [2024-12-05 19:40:58.593060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.369 BaseBdev1 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.369 BaseBdev2_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.369 [2024-12-05 19:40:58.643213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:05.369 
[2024-12-05 19:40:58.643331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.369 [2024-12-05 19:40:58.643363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:05.369 [2024-12-05 19:40:58.643379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.369 [2024-12-05 19:40:58.646304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.369 [2024-12-05 19:40:58.646553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:05.369 BaseBdev2 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.369 BaseBdev3_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.369 [2024-12-05 19:40:58.705801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:05.369 [2024-12-05 19:40:58.705870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.369 [2024-12-05 19:40:58.705902] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:05.369 [2024-12-05 19:40:58.705920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.369 [2024-12-05 19:40:58.708688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.369 [2024-12-05 19:40:58.708909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:05.369 BaseBdev3 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.369 BaseBdev4_malloc 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.369 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.370 [2024-12-05 19:40:58.758980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:05.370 [2024-12-05 19:40:58.759051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.370 [2024-12-05 19:40:58.759081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:05.370 [2024-12-05 19:40:58.759097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:21:05.370 [2024-12-05 19:40:58.761899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.370 [2024-12-05 19:40:58.762128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:05.370 BaseBdev4 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.370 spare_malloc 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.370 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.628 spare_delay 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.628 [2024-12-05 19:40:58.822753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.628 [2024-12-05 19:40:58.822843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.628 [2024-12-05 19:40:58.822873] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:05.628 [2024-12-05 19:40:58.822912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.628 [2024-12-05 19:40:58.825812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.628 [2024-12-05 19:40:58.825889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.628 spare 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.628 [2024-12-05 19:40:58.830906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.628 [2024-12-05 19:40:58.833683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.628 [2024-12-05 19:40:58.833917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.628 [2024-12-05 19:40:58.834059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:05.628 [2024-12-05 19:40:58.834382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:05.628 [2024-12-05 19:40:58.834442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:05.628 [2024-12-05 19:40:58.834951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:05.628 [2024-12-05 19:40:58.842226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:05.628 
[2024-12-05 19:40:58.842380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:05.628 [2024-12-05 19:40:58.842829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.628 "name": "raid_bdev1", 00:21:05.628 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:05.628 "strip_size_kb": 64, 00:21:05.628 "state": "online", 00:21:05.628 "raid_level": "raid5f", 00:21:05.628 "superblock": true, 00:21:05.628 "num_base_bdevs": 4, 00:21:05.628 "num_base_bdevs_discovered": 4, 00:21:05.628 "num_base_bdevs_operational": 4, 00:21:05.628 "base_bdevs_list": [ 00:21:05.628 { 00:21:05.628 "name": "BaseBdev1", 00:21:05.628 "uuid": "793025fb-cd18-5fb8-92db-803476beb319", 00:21:05.628 "is_configured": true, 00:21:05.628 "data_offset": 2048, 00:21:05.628 "data_size": 63488 00:21:05.628 }, 00:21:05.628 { 00:21:05.628 "name": "BaseBdev2", 00:21:05.628 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:05.628 "is_configured": true, 00:21:05.628 "data_offset": 2048, 00:21:05.628 "data_size": 63488 00:21:05.628 }, 00:21:05.628 { 00:21:05.628 "name": "BaseBdev3", 00:21:05.628 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:05.628 "is_configured": true, 00:21:05.628 "data_offset": 2048, 00:21:05.628 "data_size": 63488 00:21:05.628 }, 00:21:05.628 { 00:21:05.628 "name": "BaseBdev4", 00:21:05.628 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:05.628 "is_configured": true, 00:21:05.628 "data_offset": 2048, 00:21:05.628 "data_size": 63488 00:21:05.628 } 00:21:05.628 ] 00:21:05.628 }' 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.628 19:40:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.195 [2024-12-05 19:40:59.375074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:06.195 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.196 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:06.454 [2024-12-05 19:40:59.771030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:06.454 /dev/nbd0 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:21:06.454 1+0 records in 00:21:06.454 1+0 records out 00:21:06.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256539 s, 16.0 MB/s 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.454 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.455 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:06.455 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:06.455 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:06.455 19:40:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:21:07.389 496+0 records in 00:21:07.389 496+0 records out 00:21:07.389 97517568 bytes (98 MB, 93 MiB) copied, 0.691534 s, 141 MB/s 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:07.389 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:07.389 [2024-12-05 19:41:00.829567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:07.648 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.649 [2024-12-05 19:41:00.841479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:07.649 "name": "raid_bdev1",
00:21:07.649 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:07.649 "strip_size_kb": 64,
00:21:07.649 "state": "online",
00:21:07.649 "raid_level": "raid5f",
00:21:07.649 "superblock": true,
00:21:07.649 "num_base_bdevs": 4,
00:21:07.649 "num_base_bdevs_discovered": 3,
00:21:07.649 "num_base_bdevs_operational": 3,
00:21:07.649 "base_bdevs_list": [
00:21:07.649 {
00:21:07.649 "name": null,
00:21:07.649 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:07.649 "is_configured": false,
00:21:07.649 "data_offset": 0,
00:21:07.649 "data_size": 63488
00:21:07.649 },
00:21:07.649 {
00:21:07.649 "name": "BaseBdev2",
00:21:07.649 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:07.649 "is_configured": true,
00:21:07.649 "data_offset": 2048,
00:21:07.649 "data_size": 63488
00:21:07.649 },
00:21:07.649 {
00:21:07.649 "name": "BaseBdev3",
00:21:07.649 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:07.649 "is_configured": true,
00:21:07.649 "data_offset": 2048,
00:21:07.649 "data_size": 63488
00:21:07.649 },
00:21:07.649 {
00:21:07.649 "name": "BaseBdev4",
00:21:07.649 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:07.649 "is_configured": true,
00:21:07.649 "data_offset": 2048,
00:21:07.649 "data_size": 63488
00:21:07.649 }
00:21:07.649 ]
00:21:07.649 }'
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:07.649 19:41:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:08.216 19:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:21:08.216 19:41:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.216 19:41:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:08.216 [2024-12-05 19:41:01.365654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:08.216 [2024-12-05 19:41:01.380661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50
00:21:08.216 19:41:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.216 19:41:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:21:08.216 [2024-12-05 19:41:01.390209] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:09.172 "name": "raid_bdev1",
00:21:09.172 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:09.172 "strip_size_kb": 64,
00:21:09.172 "state": "online",
00:21:09.172 "raid_level": "raid5f",
00:21:09.172 "superblock": true,
00:21:09.172 "num_base_bdevs": 4,
00:21:09.172 "num_base_bdevs_discovered": 4,
00:21:09.172 "num_base_bdevs_operational": 4,
00:21:09.172 "process": {
00:21:09.172 "type": "rebuild",
00:21:09.172 "target": "spare",
00:21:09.172 "progress": {
00:21:09.172 "blocks": 17280,
00:21:09.172 "percent": 9
00:21:09.172 }
00:21:09.172 },
00:21:09.172 "base_bdevs_list": [
00:21:09.172 {
00:21:09.172 "name": "spare",
00:21:09.172 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:09.172 "is_configured": true,
00:21:09.172 "data_offset": 2048,
00:21:09.172 "data_size": 63488
00:21:09.172 },
00:21:09.172 {
00:21:09.172 "name": "BaseBdev2",
00:21:09.172 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:09.172 "is_configured": true,
00:21:09.172 "data_offset": 2048,
00:21:09.172 "data_size": 63488
00:21:09.172 },
00:21:09.172 {
00:21:09.172 "name": "BaseBdev3",
00:21:09.172 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:09.172 "is_configured": true,
00:21:09.172 "data_offset": 2048,
00:21:09.172 "data_size": 63488
00:21:09.172 },
00:21:09.172 {
00:21:09.172 "name": "BaseBdev4",
00:21:09.172 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:09.172 "is_configured": true,
00:21:09.172 "data_offset": 2048,
00:21:09.172 "data_size": 63488
00:21:09.172 }
00:21:09.172 ]
00:21:09.172 }'
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.172 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.172 [2024-12-05 19:41:02.555996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:09.172 [2024-12-05 19:41:02.603102] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:09.172 [2024-12-05 19:41:02.603208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:09.172 [2024-12-05 19:41:02.603233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:09.172 [2024-12-05 19:41:02.603247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:09.431 "name": "raid_bdev1",
00:21:09.431 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:09.431 "strip_size_kb": 64,
00:21:09.431 "state": "online",
00:21:09.431 "raid_level": "raid5f",
00:21:09.431 "superblock": true,
00:21:09.431 "num_base_bdevs": 4,
00:21:09.431 "num_base_bdevs_discovered": 3,
00:21:09.431 "num_base_bdevs_operational": 3,
00:21:09.431 "base_bdevs_list": [
00:21:09.431 {
00:21:09.431 "name": null,
00:21:09.431 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:09.431 "is_configured": false,
00:21:09.431 "data_offset": 0,
00:21:09.431 "data_size": 63488
00:21:09.431 },
00:21:09.431 {
00:21:09.431 "name": "BaseBdev2",
00:21:09.431 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:09.431 "is_configured": true,
00:21:09.431 "data_offset": 2048,
00:21:09.431 "data_size": 63488
00:21:09.431 },
00:21:09.431 {
00:21:09.431 "name": "BaseBdev3",
00:21:09.431 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:09.431 "is_configured": true,
00:21:09.431 "data_offset": 2048,
00:21:09.431 "data_size": 63488
00:21:09.431 },
00:21:09.431 {
00:21:09.431 "name": "BaseBdev4",
00:21:09.431 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:09.431 "is_configured": true,
00:21:09.431 "data_offset": 2048,
00:21:09.431 "data_size": 63488
00:21:09.431 }
00:21:09.431 ]
00:21:09.431 }'
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:09.431 19:41:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:10.000 "name": "raid_bdev1",
00:21:10.000 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:10.000 "strip_size_kb": 64,
00:21:10.000 "state": "online",
00:21:10.000 "raid_level": "raid5f",
00:21:10.000 "superblock": true,
00:21:10.000 "num_base_bdevs": 4,
00:21:10.000 "num_base_bdevs_discovered": 3,
00:21:10.000 "num_base_bdevs_operational": 3,
00:21:10.000 "base_bdevs_list": [
00:21:10.000 {
00:21:10.000 "name": null,
00:21:10.000 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:10.000 "is_configured": false,
00:21:10.000 "data_offset": 0,
00:21:10.000 "data_size": 63488
00:21:10.000 },
00:21:10.000 {
00:21:10.000 "name": "BaseBdev2",
00:21:10.000 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:10.000 "is_configured": true,
00:21:10.000 "data_offset": 2048,
00:21:10.000 "data_size": 63488
00:21:10.000 },
00:21:10.000 {
00:21:10.000 "name": "BaseBdev3",
00:21:10.000 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:10.000 "is_configured": true,
00:21:10.000 "data_offset": 2048,
00:21:10.000 "data_size": 63488
00:21:10.000 },
00:21:10.000 {
00:21:10.000 "name": "BaseBdev4",
00:21:10.000 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:10.000 "is_configured": true,
00:21:10.000 "data_offset": 2048,
00:21:10.000 "data_size": 63488
00:21:10.000 }
00:21:10.000 ]
00:21:10.000 }'
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.000 [2024-12-05 19:41:03.295445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:10.000 [2024-12-05 19:41:03.308754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.000 19:41:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:21:10.000 [2024-12-05 19:41:03.317522] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.936 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:10.936 "name": "raid_bdev1",
00:21:10.936 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:10.936 "strip_size_kb": 64,
00:21:10.936 "state": "online",
00:21:10.936 "raid_level": "raid5f",
00:21:10.936 "superblock": true,
00:21:10.936 "num_base_bdevs": 4,
00:21:10.936 "num_base_bdevs_discovered": 4,
00:21:10.936 "num_base_bdevs_operational": 4,
00:21:10.936 "process": {
00:21:10.936 "type": "rebuild",
00:21:10.936 "target": "spare",
00:21:10.936 "progress": {
00:21:10.936 "blocks": 17280,
00:21:10.936 "percent": 9
00:21:10.936 }
00:21:10.936 },
00:21:10.936 "base_bdevs_list": [
00:21:10.936 {
00:21:10.936 "name": "spare",
00:21:10.936 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:10.936 "is_configured": true,
00:21:10.936 "data_offset": 2048,
00:21:10.936 "data_size": 63488
00:21:10.936 },
00:21:10.936 {
00:21:10.936 "name": "BaseBdev2",
00:21:10.936 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:10.936 "is_configured": true,
00:21:10.936 "data_offset": 2048,
00:21:10.936 "data_size": 63488
00:21:10.936 },
00:21:10.936 {
00:21:10.936 "name": "BaseBdev3",
00:21:10.936 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:10.936 "is_configured": true,
00:21:10.936 "data_offset": 2048,
00:21:10.936 "data_size": 63488
00:21:10.936 },
00:21:10.936 {
00:21:10.936 "name": "BaseBdev4",
00:21:10.936 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:10.936 "is_configured": true,
00:21:10.936 "data_offset": 2048,
00:21:10.936 "data_size": 63488
00:21:10.936 }
00:21:10.936 ]
00:21:10.936 }'
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=698
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.195 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:11.195 "name": "raid_bdev1",
00:21:11.195 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:11.195 "strip_size_kb": 64,
00:21:11.195 "state": "online",
00:21:11.195 "raid_level": "raid5f",
00:21:11.195 "superblock": true,
00:21:11.195 "num_base_bdevs": 4,
00:21:11.195 "num_base_bdevs_discovered": 4,
00:21:11.195 "num_base_bdevs_operational": 4,
00:21:11.195 "process": {
00:21:11.195 "type": "rebuild",
00:21:11.195 "target": "spare",
00:21:11.195 "progress": {
00:21:11.195 "blocks": 21120,
00:21:11.195 "percent": 11
00:21:11.195 }
00:21:11.195 },
00:21:11.195 "base_bdevs_list": [
00:21:11.195 {
00:21:11.195 "name": "spare",
00:21:11.195 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:11.195 "is_configured": true,
00:21:11.195 "data_offset": 2048,
00:21:11.195 "data_size": 63488
00:21:11.195 },
00:21:11.195 {
00:21:11.195 "name": "BaseBdev2",
00:21:11.195 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:11.195 "is_configured": true,
00:21:11.195 "data_offset": 2048,
00:21:11.195 "data_size": 63488
00:21:11.195 },
00:21:11.195 {
00:21:11.195 "name": "BaseBdev3",
00:21:11.195 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:11.195 "is_configured": true,
00:21:11.195 "data_offset": 2048,
00:21:11.195 "data_size": 63488
00:21:11.195 },
00:21:11.195 {
00:21:11.195 "name": "BaseBdev4",
00:21:11.195 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:11.195 "is_configured": true,
00:21:11.195 "data_offset": 2048,
00:21:11.195 "data_size": 63488
00:21:11.195 }
00:21:11.195 ]
00:21:11.195 }'
00:21:11.196 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:11.196 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:11.196 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:11.454 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:11.454 19:41:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.389 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:12.389 "name": "raid_bdev1",
00:21:12.389 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:12.389 "strip_size_kb": 64,
00:21:12.389 "state": "online",
00:21:12.389 "raid_level": "raid5f",
00:21:12.389 "superblock": true,
00:21:12.389 "num_base_bdevs": 4,
00:21:12.389 "num_base_bdevs_discovered": 4,
00:21:12.389 "num_base_bdevs_operational": 4,
00:21:12.389 "process": {
00:21:12.389 "type": "rebuild",
00:21:12.389 "target": "spare",
00:21:12.389 "progress": {
00:21:12.389 "blocks": 44160,
00:21:12.389 "percent": 23
00:21:12.389 }
00:21:12.389 },
00:21:12.389 "base_bdevs_list": [
00:21:12.389 {
00:21:12.389 "name": "spare",
00:21:12.389 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:12.389 "is_configured": true,
00:21:12.389 "data_offset": 2048,
00:21:12.390 "data_size": 63488
00:21:12.390 },
00:21:12.390 {
00:21:12.390 "name": "BaseBdev2",
00:21:12.390 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:12.390 "is_configured": true,
00:21:12.390 "data_offset": 2048,
00:21:12.390 "data_size": 63488
00:21:12.390 },
00:21:12.390 {
00:21:12.390 "name": "BaseBdev3",
00:21:12.390 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:12.390 "is_configured": true,
00:21:12.390 "data_offset": 2048,
00:21:12.390 "data_size": 63488
00:21:12.390 },
00:21:12.390 {
00:21:12.390 "name": "BaseBdev4",
00:21:12.390 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:12.390 "is_configured": true,
00:21:12.390 "data_offset": 2048,
00:21:12.390 "data_size": 63488
00:21:12.390 }
00:21:12.390 ]
00:21:12.390 }'
00:21:12.390 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:12.390 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:12.390 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:12.647 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:12.647 19:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:13.583 "name": "raid_bdev1",
00:21:13.583 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:13.583 "strip_size_kb": 64,
00:21:13.583 "state": "online",
00:21:13.583 "raid_level": "raid5f",
00:21:13.583 "superblock": true,
00:21:13.583 "num_base_bdevs": 4,
00:21:13.583 "num_base_bdevs_discovered": 4,
00:21:13.583 "num_base_bdevs_operational": 4,
00:21:13.583 "process": {
00:21:13.583 "type": "rebuild",
00:21:13.583 "target": "spare",
00:21:13.583 "progress": {
00:21:13.583 "blocks": 65280,
00:21:13.583 "percent": 34
00:21:13.583 }
00:21:13.583 },
00:21:13.583 "base_bdevs_list": [
00:21:13.583 {
00:21:13.583 "name": "spare",
00:21:13.583 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:13.583 "is_configured": true,
00:21:13.583 "data_offset": 2048,
00:21:13.583 "data_size": 63488
00:21:13.583 },
00:21:13.583 {
00:21:13.583 "name": "BaseBdev2",
00:21:13.583 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:13.583 "is_configured": true,
00:21:13.583 "data_offset": 2048,
00:21:13.583 "data_size": 63488
00:21:13.583 },
00:21:13.583 {
00:21:13.583 "name": "BaseBdev3",
00:21:13.583 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:13.583 "is_configured": true,
00:21:13.583 "data_offset": 2048,
00:21:13.583 "data_size": 63488
00:21:13.583 },
00:21:13.583 {
00:21:13.583 "name": "BaseBdev4",
00:21:13.583 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:13.583 "is_configured": true,
00:21:13.583 "data_offset": 2048,
00:21:13.583 "data_size": 63488
00:21:13.583 }
00:21:13.583 ]
00:21:13.583 }'
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:13.583 19:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:13.583 19:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:13.583 19:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.960 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:14.960 "name": "raid_bdev1",
00:21:14.960 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:14.960 "strip_size_kb": 64,
00:21:14.960 "state": "online",
00:21:14.960 "raid_level": "raid5f",
00:21:14.960 "superblock": true,
00:21:14.960 "num_base_bdevs": 4,
00:21:14.960 "num_base_bdevs_discovered": 4,
00:21:14.960 "num_base_bdevs_operational": 4,
00:21:14.960 "process": {
00:21:14.960 "type": "rebuild",
00:21:14.960 "target": "spare",
00:21:14.960 "progress": {
00:21:14.960 "blocks": 88320,
00:21:14.960 "percent": 46
00:21:14.960 }
00:21:14.960 },
00:21:14.960 "base_bdevs_list": [
00:21:14.960 {
00:21:14.960 "name": "spare",
00:21:14.960 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:14.961 "is_configured": true,
00:21:14.961 "data_offset": 2048,
00:21:14.961 "data_size": 63488
00:21:14.961 },
00:21:14.961 {
00:21:14.961 "name": "BaseBdev2",
00:21:14.961 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:14.961 "is_configured": true,
00:21:14.961 "data_offset": 2048,
00:21:14.961 "data_size": 63488
00:21:14.961 },
00:21:14.961 {
00:21:14.961 "name": "BaseBdev3",
00:21:14.961 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:14.961 "is_configured": true,
00:21:14.961 "data_offset": 2048,
00:21:14.961 "data_size": 63488
00:21:14.961 },
00:21:14.961 {
00:21:14.961 "name": "BaseBdev4",
00:21:14.961 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:14.961 "is_configured": true,
00:21:14.961 "data_offset": 2048,
00:21:14.961 "data_size": 63488
00:21:14.961 }
00:21:14.961 ]
00:21:14.961 }'
00:21:14.961 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:14.961 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:14.961 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:14.961 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:14.961 19:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.897 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:15.897 "name": "raid_bdev1",
00:21:15.897 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:15.897 "strip_size_kb": 64,
00:21:15.897 "state": "online",
00:21:15.897 "raid_level": "raid5f",
00:21:15.897 "superblock": true,
00:21:15.897 "num_base_bdevs": 4,
00:21:15.897 "num_base_bdevs_discovered": 4,
00:21:15.897 "num_base_bdevs_operational": 4,
00:21:15.897 "process": {
00:21:15.897 "type": "rebuild",
00:21:15.897 "target": "spare",
00:21:15.897 "progress": {
00:21:15.897 "blocks": 111360,
00:21:15.897 "percent": 58
00:21:15.897 }
00:21:15.897 },
00:21:15.897 "base_bdevs_list": [
00:21:15.897 {
00:21:15.897 "name": "spare",
00:21:15.897 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:15.897 "is_configured": true,
00:21:15.897 "data_offset": 2048,
00:21:15.897 "data_size": 63488
00:21:15.897 },
00:21:15.897 {
00:21:15.897 "name": "BaseBdev2",
00:21:15.898 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:15.898 "is_configured": true,
00:21:15.898 "data_offset": 2048,
00:21:15.898 "data_size": 63488
00:21:15.898 },
00:21:15.898 {
00:21:15.898 "name": "BaseBdev3",
00:21:15.898 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:15.898 "is_configured": true,
00:21:15.898 "data_offset": 2048,
00:21:15.898 "data_size": 63488
00:21:15.898 },
00:21:15.898 {
00:21:15.898 "name": "BaseBdev4",
00:21:15.898 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:15.898 "is_configured": true,
00:21:15.898 "data_offset": 2048,
00:21:15.898 "data_size": 63488
00:21:15.898 }
00:21:15.898 ]
00:21:15.898 }'
00:21:15.898 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:15.898 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:15.898 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:16.157 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:16.157 19:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.092 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:17.092 "name": "raid_bdev1",
00:21:17.092 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:17.092 "strip_size_kb": 64,
00:21:17.092 "state": "online",
00:21:17.092 "raid_level": "raid5f",
00:21:17.092 "superblock": true,
00:21:17.092 "num_base_bdevs": 4,
00:21:17.092 "num_base_bdevs_discovered": 4,
00:21:17.092 "num_base_bdevs_operational": 4,
00:21:17.092 "process": {
00:21:17.092 "type": "rebuild",
00:21:17.092 "target": "spare",
00:21:17.092 "progress": {
00:21:17.092 "blocks": 132480,
00:21:17.092 "percent": 69
00:21:17.092 }
00:21:17.092 },
00:21:17.092 "base_bdevs_list": [
00:21:17.092 {
00:21:17.092 "name": "spare",
00:21:17.092 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5",
00:21:17.092 "is_configured": true,
00:21:17.093 "data_offset": 2048,
00:21:17.093 "data_size": 63488
00:21:17.093 },
00:21:17.093 {
00:21:17.093 "name": "BaseBdev2",
00:21:17.093 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9",
00:21:17.093 "is_configured": true,
00:21:17.093 "data_offset": 2048,
00:21:17.093 "data_size": 63488
00:21:17.093 },
00:21:17.093 {
00:21:17.093 "name": "BaseBdev3",
00:21:17.093 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91",
00:21:17.093 "is_configured": true,
00:21:17.093 "data_offset": 2048,
00:21:17.093 "data_size": 63488
00:21:17.093 },
00:21:17.093 {
00:21:17.093 "name": "BaseBdev4",
00:21:17.093 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3",
00:21:17.093 "is_configured": true,
00:21:17.093 "data_offset": 2048,
00:21:17.093 "data_size": 63488
00:21:17.093 }
00:21:17.093 ]
00:21:17.093 }'
00:21:17.093 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:17.093 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:17.093 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:17.093 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:17.093 19:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:18.466 "name": "raid_bdev1",
00:21:18.466 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4",
00:21:18.466 "strip_size_kb": 64,
00:21:18.466 "state": "online",
00:21:18.466 "raid_level": "raid5f",
00:21:18.466 "superblock": true,
00:21:18.466 "num_base_bdevs": 4,
00:21:18.466 "num_base_bdevs_discovered": 4,
00:21:18.466 "num_base_bdevs_operational": 4, 00:21:18.466 "process": { 00:21:18.466 "type": "rebuild", 00:21:18.466 "target": "spare", 00:21:18.466 "progress": { 00:21:18.466 "blocks": 155520, 00:21:18.466 "percent": 81 00:21:18.466 } 00:21:18.466 }, 00:21:18.466 "base_bdevs_list": [ 00:21:18.466 { 00:21:18.466 "name": "spare", 00:21:18.466 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:18.466 "is_configured": true, 00:21:18.466 "data_offset": 2048, 00:21:18.466 "data_size": 63488 00:21:18.466 }, 00:21:18.466 { 00:21:18.466 "name": "BaseBdev2", 00:21:18.466 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:18.466 "is_configured": true, 00:21:18.466 "data_offset": 2048, 00:21:18.466 "data_size": 63488 00:21:18.466 }, 00:21:18.466 { 00:21:18.466 "name": "BaseBdev3", 00:21:18.466 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:18.466 "is_configured": true, 00:21:18.466 "data_offset": 2048, 00:21:18.466 "data_size": 63488 00:21:18.466 }, 00:21:18.466 { 00:21:18.466 "name": "BaseBdev4", 00:21:18.466 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:18.466 "is_configured": true, 00:21:18.466 "data_offset": 2048, 00:21:18.466 "data_size": 63488 00:21:18.466 } 00:21:18.466 ] 00:21:18.466 }' 00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.466 19:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.399 "name": "raid_bdev1", 00:21:19.399 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:19.399 "strip_size_kb": 64, 00:21:19.399 "state": "online", 00:21:19.399 "raid_level": "raid5f", 00:21:19.399 "superblock": true, 00:21:19.399 "num_base_bdevs": 4, 00:21:19.399 "num_base_bdevs_discovered": 4, 00:21:19.399 "num_base_bdevs_operational": 4, 00:21:19.399 "process": { 00:21:19.399 "type": "rebuild", 00:21:19.399 "target": "spare", 00:21:19.399 "progress": { 00:21:19.399 "blocks": 176640, 00:21:19.399 "percent": 92 00:21:19.399 } 00:21:19.399 }, 00:21:19.399 "base_bdevs_list": [ 00:21:19.399 { 00:21:19.399 "name": "spare", 00:21:19.399 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:19.399 "is_configured": true, 00:21:19.399 "data_offset": 2048, 00:21:19.399 "data_size": 63488 00:21:19.399 }, 00:21:19.399 { 00:21:19.399 "name": "BaseBdev2", 
00:21:19.399 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:19.399 "is_configured": true, 00:21:19.399 "data_offset": 2048, 00:21:19.399 "data_size": 63488 00:21:19.399 }, 00:21:19.399 { 00:21:19.399 "name": "BaseBdev3", 00:21:19.399 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:19.399 "is_configured": true, 00:21:19.399 "data_offset": 2048, 00:21:19.399 "data_size": 63488 00:21:19.399 }, 00:21:19.399 { 00:21:19.399 "name": "BaseBdev4", 00:21:19.399 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:19.399 "is_configured": true, 00:21:19.399 "data_offset": 2048, 00:21:19.399 "data_size": 63488 00:21:19.399 } 00:21:19.399 ] 00:21:19.399 }' 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.399 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.657 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.657 19:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:20.220 [2024-12-05 19:41:13.420385] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:20.220 [2024-12-05 19:41:13.420523] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:20.220 [2024-12-05 19:41:13.420842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.478 19:41:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.478 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.736 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.736 "name": "raid_bdev1", 00:21:20.736 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:20.736 "strip_size_kb": 64, 00:21:20.736 "state": "online", 00:21:20.736 "raid_level": "raid5f", 00:21:20.736 "superblock": true, 00:21:20.736 "num_base_bdevs": 4, 00:21:20.736 "num_base_bdevs_discovered": 4, 00:21:20.736 "num_base_bdevs_operational": 4, 00:21:20.736 "base_bdevs_list": [ 00:21:20.736 { 00:21:20.736 "name": "spare", 00:21:20.736 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:20.736 "is_configured": true, 00:21:20.736 "data_offset": 2048, 00:21:20.736 "data_size": 63488 00:21:20.736 }, 00:21:20.736 { 00:21:20.736 "name": "BaseBdev2", 00:21:20.736 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:20.736 "is_configured": true, 00:21:20.736 "data_offset": 2048, 00:21:20.736 "data_size": 63488 00:21:20.736 }, 00:21:20.736 { 00:21:20.736 "name": "BaseBdev3", 00:21:20.736 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:20.736 "is_configured": true, 00:21:20.736 "data_offset": 2048, 00:21:20.736 
"data_size": 63488 00:21:20.736 }, 00:21:20.736 { 00:21:20.736 "name": "BaseBdev4", 00:21:20.736 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:20.736 "is_configured": true, 00:21:20.736 "data_offset": 2048, 00:21:20.736 "data_size": 63488 00:21:20.736 } 00:21:20.736 ] 00:21:20.736 }' 00:21:20.736 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.736 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:20.736 19:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.736 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:20.736 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:20.736 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.737 19:41:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.737 "name": "raid_bdev1", 00:21:20.737 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:20.737 "strip_size_kb": 64, 00:21:20.737 "state": "online", 00:21:20.737 "raid_level": "raid5f", 00:21:20.737 "superblock": true, 00:21:20.737 "num_base_bdevs": 4, 00:21:20.737 "num_base_bdevs_discovered": 4, 00:21:20.737 "num_base_bdevs_operational": 4, 00:21:20.737 "base_bdevs_list": [ 00:21:20.737 { 00:21:20.737 "name": "spare", 00:21:20.737 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:20.737 "is_configured": true, 00:21:20.737 "data_offset": 2048, 00:21:20.737 "data_size": 63488 00:21:20.737 }, 00:21:20.737 { 00:21:20.737 "name": "BaseBdev2", 00:21:20.737 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:20.737 "is_configured": true, 00:21:20.737 "data_offset": 2048, 00:21:20.737 "data_size": 63488 00:21:20.737 }, 00:21:20.737 { 00:21:20.737 "name": "BaseBdev3", 00:21:20.737 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:20.737 "is_configured": true, 00:21:20.737 "data_offset": 2048, 00:21:20.737 "data_size": 63488 00:21:20.737 }, 00:21:20.737 { 00:21:20.737 "name": "BaseBdev4", 00:21:20.737 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:20.737 "is_configured": true, 00:21:20.737 "data_offset": 2048, 00:21:20.737 "data_size": 63488 00:21:20.737 } 00:21:20.737 ] 00:21:20.737 }' 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:20.737 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.995 "name": "raid_bdev1", 00:21:20.995 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:20.995 "strip_size_kb": 64, 00:21:20.995 "state": "online", 00:21:20.995 "raid_level": "raid5f", 00:21:20.995 "superblock": true, 00:21:20.995 "num_base_bdevs": 4, 00:21:20.995 "num_base_bdevs_discovered": 4, 00:21:20.995 
"num_base_bdevs_operational": 4, 00:21:20.995 "base_bdevs_list": [ 00:21:20.995 { 00:21:20.995 "name": "spare", 00:21:20.995 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:20.995 "is_configured": true, 00:21:20.995 "data_offset": 2048, 00:21:20.995 "data_size": 63488 00:21:20.995 }, 00:21:20.995 { 00:21:20.995 "name": "BaseBdev2", 00:21:20.995 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:20.995 "is_configured": true, 00:21:20.995 "data_offset": 2048, 00:21:20.995 "data_size": 63488 00:21:20.995 }, 00:21:20.995 { 00:21:20.995 "name": "BaseBdev3", 00:21:20.995 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:20.995 "is_configured": true, 00:21:20.995 "data_offset": 2048, 00:21:20.995 "data_size": 63488 00:21:20.995 }, 00:21:20.995 { 00:21:20.995 "name": "BaseBdev4", 00:21:20.995 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:20.995 "is_configured": true, 00:21:20.995 "data_offset": 2048, 00:21:20.995 "data_size": 63488 00:21:20.995 } 00:21:20.995 ] 00:21:20.995 }' 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.995 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.562 [2024-12-05 19:41:14.741046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.562 [2024-12-05 19:41:14.741271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.562 [2024-12-05 19:41:14.741503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.562 [2024-12-05 19:41:14.741679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:21:21.562 [2024-12-05 19:41:14.741741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:21.562 19:41:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:21.562 19:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:21.819 /dev/nbd0 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.819 1+0 records in 00:21:21.819 1+0 records out 00:21:21.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260277 s, 15.7 MB/s 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:21.819 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:22.076 /dev/nbd1 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.076 1+0 records in 00:21:22.076 1+0 records out 00:21:22.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416567 s, 9.8 MB/s 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:22.076 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.334 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.592 19:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.850 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.851 [2024-12-05 19:41:16.288804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.851 [2024-12-05 19:41:16.288867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.851 [2024-12-05 19:41:16.288915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:22.851 [2024-12-05 19:41:16.288936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.109 [2024-12-05 19:41:16.292092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.109 [2024-12-05 19:41:16.292155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:23.109 [2024-12-05 19:41:16.292285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:23.109 [2024-12-05 19:41:16.292354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.109 [2024-12-05 19:41:16.292566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.109 [2024-12-05 19:41:16.292727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:21:23.109 [2024-12-05 19:41:16.292853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:23.109 spare 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.109 [2024-12-05 19:41:16.393045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:23.109 [2024-12-05 19:41:16.393123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:23.109 [2024-12-05 19:41:16.393538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:21:23.109 [2024-12-05 19:41:16.400164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:23.109 [2024-12-05 19:41:16.400214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:23.109 [2024-12-05 19:41:16.400449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.109 19:41:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.109 "name": "raid_bdev1", 00:21:23.109 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:23.109 "strip_size_kb": 64, 00:21:23.109 "state": "online", 00:21:23.109 "raid_level": "raid5f", 00:21:23.109 "superblock": true, 00:21:23.109 "num_base_bdevs": 4, 00:21:23.109 "num_base_bdevs_discovered": 4, 00:21:23.109 "num_base_bdevs_operational": 4, 00:21:23.109 "base_bdevs_list": [ 00:21:23.109 { 00:21:23.109 "name": "spare", 00:21:23.109 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:23.109 "is_configured": true, 00:21:23.109 "data_offset": 2048, 00:21:23.109 "data_size": 63488 00:21:23.109 }, 00:21:23.109 { 00:21:23.109 "name": "BaseBdev2", 00:21:23.109 "uuid": 
"dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:23.109 "is_configured": true, 00:21:23.109 "data_offset": 2048, 00:21:23.109 "data_size": 63488 00:21:23.109 }, 00:21:23.109 { 00:21:23.109 "name": "BaseBdev3", 00:21:23.109 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:23.109 "is_configured": true, 00:21:23.109 "data_offset": 2048, 00:21:23.109 "data_size": 63488 00:21:23.109 }, 00:21:23.109 { 00:21:23.109 "name": "BaseBdev4", 00:21:23.109 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:23.109 "is_configured": true, 00:21:23.109 "data_offset": 2048, 00:21:23.109 "data_size": 63488 00:21:23.109 } 00:21:23.109 ] 00:21:23.109 }' 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.109 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.676 19:41:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.676 "name": "raid_bdev1", 00:21:23.676 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:23.676 "strip_size_kb": 64, 00:21:23.676 "state": "online", 00:21:23.676 "raid_level": "raid5f", 00:21:23.676 "superblock": true, 00:21:23.676 "num_base_bdevs": 4, 00:21:23.676 "num_base_bdevs_discovered": 4, 00:21:23.676 "num_base_bdevs_operational": 4, 00:21:23.676 "base_bdevs_list": [ 00:21:23.676 { 00:21:23.676 "name": "spare", 00:21:23.676 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:23.676 "is_configured": true, 00:21:23.676 "data_offset": 2048, 00:21:23.676 "data_size": 63488 00:21:23.676 }, 00:21:23.676 { 00:21:23.676 "name": "BaseBdev2", 00:21:23.676 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:23.676 "is_configured": true, 00:21:23.676 "data_offset": 2048, 00:21:23.676 "data_size": 63488 00:21:23.676 }, 00:21:23.676 { 00:21:23.676 "name": "BaseBdev3", 00:21:23.676 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:23.676 "is_configured": true, 00:21:23.676 "data_offset": 2048, 00:21:23.676 "data_size": 63488 00:21:23.676 }, 00:21:23.676 { 00:21:23.676 "name": "BaseBdev4", 00:21:23.676 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:23.676 "is_configured": true, 00:21:23.676 "data_offset": 2048, 00:21:23.676 "data_size": 63488 00:21:23.676 } 00:21:23.676 ] 00:21:23.676 }' 00:21:23.676 19:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.676 
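The `verify_raid_bdev_process` steps traced above pull `bdev_raid_get_bdevs` JSON through `jq -r '.process.type // "none"'` and `'.process.target // "none"'` and compare the results against expected values. A minimal standalone sketch of that check, using canned sample JSON in place of a live `rpc_cmd` response and a hypothetical `get_field` helper built on `grep` so the sketch needs no extra tools:

```shell
#!/usr/bin/env bash
# Canned stand-in for 'rpc_cmd bdev_raid_get_bdevs all' output (values
# mirror the rebuild phase seen in the trace; not a live RPC response).
raid_json='{ "name": "raid_bdev1", "process": { "type": "rebuild", "target": "spare" } }'

get_field() {
    # Extract the string value of "<key>" from the canned JSON, falling
    # back to "none" like the jq '// "none"' default in the test script.
    local key=$1 val
    val=$(printf '%s' "$raid_json" | grep -o "\"$key\": \"[^\"]*\"" | head -n1 | cut -d'"' -f4)
    echo "${val:-none}"
}

process_type=$(get_field type)
process_target=$(get_field target)

# Same shape as the [[ rebuild == \r\e\b\u\i\l\d ]] / [[ spare == \s\p\a\r\e ]]
# comparisons in the trace.
[[ $process_type == "rebuild" ]] && [[ $process_target == "spare" ]] \
    && echo "process check passed"
```

When no background process is running, the real script sees `"none"` for both fields, which is what the `// "none"` fallback in the jq filter provides.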
19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.676 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.936 [2024-12-05 19:41:17.128030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.936 "name": "raid_bdev1", 00:21:23.936 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:23.936 "strip_size_kb": 64, 00:21:23.936 "state": "online", 00:21:23.936 "raid_level": "raid5f", 00:21:23.936 "superblock": true, 00:21:23.936 "num_base_bdevs": 4, 00:21:23.936 "num_base_bdevs_discovered": 3, 00:21:23.936 "num_base_bdevs_operational": 3, 00:21:23.936 "base_bdevs_list": [ 00:21:23.936 { 00:21:23.936 "name": null, 00:21:23.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.936 "is_configured": false, 00:21:23.936 "data_offset": 0, 00:21:23.936 "data_size": 63488 00:21:23.936 }, 00:21:23.936 { 00:21:23.936 "name": "BaseBdev2", 00:21:23.936 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:23.936 "is_configured": true, 00:21:23.936 "data_offset": 2048, 00:21:23.936 "data_size": 63488 00:21:23.936 }, 00:21:23.936 { 00:21:23.936 "name": "BaseBdev3", 00:21:23.936 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:23.936 "is_configured": true, 00:21:23.936 "data_offset": 2048, 00:21:23.936 "data_size": 63488 00:21:23.936 }, 00:21:23.936 { 00:21:23.936 "name": "BaseBdev4", 
00:21:23.936 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:23.936 "is_configured": true, 00:21:23.936 "data_offset": 2048, 00:21:23.936 "data_size": 63488 00:21:23.936 } 00:21:23.936 ] 00:21:23.936 }' 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.936 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.503 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.503 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.503 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.503 [2024-12-05 19:41:17.652222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.503 [2024-12-05 19:41:17.652467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:24.503 [2024-12-05 19:41:17.652497] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
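The examine path logged here re-adds the base bdev because its superblock sequence number (4) is older than the raid bdev's current one (5). A toy shell rendering of that decision, with the seq_number values taken from the log lines above:

```shell
#!/usr/bin/env bash
# Sketch of the raid_bdev_examine_sb decision seen in the trace: a base
# bdev whose superblock seq_number lags the raid bdev's is re-added and
# rebuilt. Values are the ones printed in the log, hard-coded here.
sb_seq=4      # seq_number found in the spare bdev's superblock
raid_seq=5    # seq_number of the existing raid_bdev1

if (( sb_seq < raid_seq )); then
    # Matches the *NOTICE* line in the trace.
    echo "Re-adding bdev spare to raid bdev raid_bdev1."
else
    echo "superblock is current; no re-add needed"
fi
```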
00:21:24.503 [2024-12-05 19:41:17.652550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.503 [2024-12-05 19:41:17.666473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:21:24.503 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.503 19:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:24.503 [2024-12-05 19:41:17.675550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.512 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.512 "name": "raid_bdev1", 00:21:25.512 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:25.512 "strip_size_kb": 64, 00:21:25.512 "state": "online", 00:21:25.512 
"raid_level": "raid5f", 00:21:25.512 "superblock": true, 00:21:25.512 "num_base_bdevs": 4, 00:21:25.512 "num_base_bdevs_discovered": 4, 00:21:25.512 "num_base_bdevs_operational": 4, 00:21:25.512 "process": { 00:21:25.512 "type": "rebuild", 00:21:25.512 "target": "spare", 00:21:25.512 "progress": { 00:21:25.512 "blocks": 17280, 00:21:25.512 "percent": 9 00:21:25.512 } 00:21:25.512 }, 00:21:25.512 "base_bdevs_list": [ 00:21:25.513 { 00:21:25.513 "name": "spare", 00:21:25.513 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:25.513 "is_configured": true, 00:21:25.513 "data_offset": 2048, 00:21:25.513 "data_size": 63488 00:21:25.513 }, 00:21:25.513 { 00:21:25.513 "name": "BaseBdev2", 00:21:25.513 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:25.513 "is_configured": true, 00:21:25.513 "data_offset": 2048, 00:21:25.513 "data_size": 63488 00:21:25.513 }, 00:21:25.513 { 00:21:25.513 "name": "BaseBdev3", 00:21:25.513 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:25.513 "is_configured": true, 00:21:25.513 "data_offset": 2048, 00:21:25.513 "data_size": 63488 00:21:25.513 }, 00:21:25.513 { 00:21:25.513 "name": "BaseBdev4", 00:21:25.513 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:25.513 "is_configured": true, 00:21:25.513 "data_offset": 2048, 00:21:25.513 "data_size": 63488 00:21:25.513 } 00:21:25.513 ] 00:21:25.513 }' 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.513 [2024-12-05 19:41:18.833102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.513 [2024-12-05 19:41:18.888169] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:25.513 [2024-12-05 19:41:18.888274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.513 [2024-12-05 19:41:18.888301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.513 [2024-12-05 19:41:18.888318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.513 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.805 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.805 "name": "raid_bdev1", 00:21:25.805 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:25.805 "strip_size_kb": 64, 00:21:25.805 "state": "online", 00:21:25.805 "raid_level": "raid5f", 00:21:25.805 "superblock": true, 00:21:25.805 "num_base_bdevs": 4, 00:21:25.805 "num_base_bdevs_discovered": 3, 00:21:25.805 "num_base_bdevs_operational": 3, 00:21:25.805 "base_bdevs_list": [ 00:21:25.805 { 00:21:25.805 "name": null, 00:21:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.805 "is_configured": false, 00:21:25.805 "data_offset": 0, 00:21:25.805 "data_size": 63488 00:21:25.805 }, 00:21:25.805 { 00:21:25.805 "name": "BaseBdev2", 00:21:25.805 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:25.805 "is_configured": true, 00:21:25.805 "data_offset": 2048, 00:21:25.805 "data_size": 63488 00:21:25.805 }, 00:21:25.805 { 00:21:25.805 "name": "BaseBdev3", 00:21:25.805 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:25.805 "is_configured": true, 00:21:25.805 "data_offset": 2048, 00:21:25.805 "data_size": 63488 00:21:25.805 }, 00:21:25.805 { 00:21:25.805 "name": "BaseBdev4", 00:21:25.805 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:25.805 "is_configured": true, 00:21:25.805 "data_offset": 2048, 00:21:25.805 "data_size": 63488 00:21:25.805 } 00:21:25.805 ] 00:21:25.805 }' 
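The `verify_raid_bdev_state raid_bdev1 online raid5f 64 3` call above confirms that after removing `spare`, `num_base_bdevs` stays 4 while discovered/operational drop to 3 and the removed slot is zeroed out (`name: null`, all-zero uuid, `is_configured: false`). A small sketch of that count check against canned JSON, using `grep -c` and a hypothetical `count` helper in place of the live `rpc_cmd`/`jq` pipeline:

```shell
#!/usr/bin/env bash
# Canned base_bdevs_list matching the post-removal state in the trace:
# one cleared slot plus three configured base bdevs.
state_json='"num_base_bdevs": 4,
"num_base_bdevs_discovered": 3,
"num_base_bdevs_operational": 3,
"base_bdevs_list": [
  { "name": null, "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": false },
  { "name": "BaseBdev2", "is_configured": true },
  { "name": "BaseBdev3", "is_configured": true },
  { "name": "BaseBdev4", "is_configured": true }
]'

# Count lines of the canned JSON matching a literal pattern.
count() { printf '%s' "$state_json" | grep -c "$1"; }

configured=$(count '"is_configured": true')
unconfigured=$(count '"is_configured": false')

# 3 configured + 1 cleared slot == the discovered/operational == 3 state
# that verify_raid_bdev_state asserts in the trace.
[[ $configured -eq 3 ]] && [[ $unconfigured -eq 1 ]] \
    && echo "state check passed"
```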
00:21:25.805 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.805 19:41:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.064 19:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:26.064 19:41:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.064 19:41:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.064 [2024-12-05 19:41:19.432346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:26.064 [2024-12-05 19:41:19.432428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.064 [2024-12-05 19:41:19.432471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:26.064 [2024-12-05 19:41:19.432491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.064 [2024-12-05 19:41:19.433134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.064 [2024-12-05 19:41:19.433194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:26.064 [2024-12-05 19:41:19.433310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:26.064 [2024-12-05 19:41:19.433334] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:26.064 [2024-12-05 19:41:19.433347] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:26.064 [2024-12-05 19:41:19.433382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.064 [2024-12-05 19:41:19.446521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:21:26.064 spare 00:21:26.064 19:41:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.064 19:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:26.064 [2024-12-05 19:41:19.454993] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.443 "name": "raid_bdev1", 00:21:27.443 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:27.443 "strip_size_kb": 64, 00:21:27.443 "state": 
"online", 00:21:27.443 "raid_level": "raid5f", 00:21:27.443 "superblock": true, 00:21:27.443 "num_base_bdevs": 4, 00:21:27.443 "num_base_bdevs_discovered": 4, 00:21:27.443 "num_base_bdevs_operational": 4, 00:21:27.443 "process": { 00:21:27.443 "type": "rebuild", 00:21:27.443 "target": "spare", 00:21:27.443 "progress": { 00:21:27.443 "blocks": 17280, 00:21:27.443 "percent": 9 00:21:27.443 } 00:21:27.443 }, 00:21:27.443 "base_bdevs_list": [ 00:21:27.443 { 00:21:27.443 "name": "spare", 00:21:27.443 "uuid": "7db48fb8-8dd5-54bd-986d-77a415a579c5", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 "data_size": 63488 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "name": "BaseBdev2", 00:21:27.443 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 "data_size": 63488 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "name": "BaseBdev3", 00:21:27.443 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 "data_size": 63488 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "name": "BaseBdev4", 00:21:27.443 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 "data_size": 63488 00:21:27.443 } 00:21:27.443 ] 00:21:27.443 }' 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:27.443 19:41:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.443 [2024-12-05 19:41:20.620519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.443 [2024-12-05 19:41:20.666465] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.443 [2024-12-05 19:41:20.666563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.443 [2024-12-05 19:41:20.666591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.443 [2024-12-05 19:41:20.666602] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.443 19:41:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.443 "name": "raid_bdev1", 00:21:27.443 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:27.443 "strip_size_kb": 64, 00:21:27.443 "state": "online", 00:21:27.443 "raid_level": "raid5f", 00:21:27.443 "superblock": true, 00:21:27.443 "num_base_bdevs": 4, 00:21:27.443 "num_base_bdevs_discovered": 3, 00:21:27.443 "num_base_bdevs_operational": 3, 00:21:27.443 "base_bdevs_list": [ 00:21:27.443 { 00:21:27.443 "name": null, 00:21:27.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.443 "is_configured": false, 00:21:27.443 "data_offset": 0, 00:21:27.443 "data_size": 63488 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "name": "BaseBdev2", 00:21:27.443 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 "data_size": 63488 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "name": "BaseBdev3", 00:21:27.443 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 "data_size": 63488 00:21:27.443 }, 00:21:27.443 { 00:21:27.443 "name": "BaseBdev4", 00:21:27.443 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:27.443 "is_configured": true, 00:21:27.443 "data_offset": 2048, 00:21:27.443 
"data_size": 63488 00:21:27.443 } 00:21:27.443 ] 00:21:27.443 }' 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.443 19:41:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.012 "name": "raid_bdev1", 00:21:28.012 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:28.012 "strip_size_kb": 64, 00:21:28.012 "state": "online", 00:21:28.012 "raid_level": "raid5f", 00:21:28.012 "superblock": true, 00:21:28.012 "num_base_bdevs": 4, 00:21:28.012 "num_base_bdevs_discovered": 3, 00:21:28.012 "num_base_bdevs_operational": 3, 00:21:28.012 "base_bdevs_list": [ 00:21:28.012 { 00:21:28.012 "name": null, 00:21:28.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.012 
"is_configured": false, 00:21:28.012 "data_offset": 0, 00:21:28.012 "data_size": 63488 00:21:28.012 }, 00:21:28.012 { 00:21:28.012 "name": "BaseBdev2", 00:21:28.012 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:28.012 "is_configured": true, 00:21:28.012 "data_offset": 2048, 00:21:28.012 "data_size": 63488 00:21:28.012 }, 00:21:28.012 { 00:21:28.012 "name": "BaseBdev3", 00:21:28.012 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:28.012 "is_configured": true, 00:21:28.012 "data_offset": 2048, 00:21:28.012 "data_size": 63488 00:21:28.012 }, 00:21:28.012 { 00:21:28.012 "name": "BaseBdev4", 00:21:28.012 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:28.012 "is_configured": true, 00:21:28.012 "data_offset": 2048, 00:21:28.012 "data_size": 63488 00:21:28.012 } 00:21:28.012 ] 00:21:28.012 }' 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.012 19:41:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.012 [2024-12-05 19:41:21.369604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.012 [2024-12-05 19:41:21.369710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.012 [2024-12-05 19:41:21.369777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:28.012 [2024-12-05 19:41:21.369793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.012 [2024-12-05 19:41:21.370377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.012 [2024-12-05 19:41:21.370420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.012 [2024-12-05 19:41:21.370532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:28.012 [2024-12-05 19:41:21.370553] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:28.012 [2024-12-05 19:41:21.370569] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:28.012 [2024-12-05 19:41:21.370583] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:28.012 BaseBdev1 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.012 19:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.948 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.206 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.206 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.206 "name": "raid_bdev1", 00:21:29.206 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:29.206 "strip_size_kb": 64, 00:21:29.206 "state": "online", 00:21:29.206 "raid_level": "raid5f", 00:21:29.206 "superblock": true, 00:21:29.206 "num_base_bdevs": 4, 00:21:29.206 "num_base_bdevs_discovered": 3, 00:21:29.206 "num_base_bdevs_operational": 3, 00:21:29.206 "base_bdevs_list": [ 00:21:29.206 { 00:21:29.206 "name": null, 00:21:29.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.206 "is_configured": false, 00:21:29.206 
"data_offset": 0, 00:21:29.206 "data_size": 63488 00:21:29.206 }, 00:21:29.206 { 00:21:29.206 "name": "BaseBdev2", 00:21:29.206 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:29.206 "is_configured": true, 00:21:29.206 "data_offset": 2048, 00:21:29.206 "data_size": 63488 00:21:29.206 }, 00:21:29.206 { 00:21:29.206 "name": "BaseBdev3", 00:21:29.206 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:29.206 "is_configured": true, 00:21:29.206 "data_offset": 2048, 00:21:29.206 "data_size": 63488 00:21:29.206 }, 00:21:29.206 { 00:21:29.206 "name": "BaseBdev4", 00:21:29.206 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:29.206 "is_configured": true, 00:21:29.206 "data_offset": 2048, 00:21:29.206 "data_size": 63488 00:21:29.206 } 00:21:29.206 ] 00:21:29.206 }' 00:21:29.206 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.206 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.465 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:29.724 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.724 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.724 "name": "raid_bdev1", 00:21:29.724 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:29.724 "strip_size_kb": 64, 00:21:29.724 "state": "online", 00:21:29.724 "raid_level": "raid5f", 00:21:29.724 "superblock": true, 00:21:29.724 "num_base_bdevs": 4, 00:21:29.724 "num_base_bdevs_discovered": 3, 00:21:29.724 "num_base_bdevs_operational": 3, 00:21:29.724 "base_bdevs_list": [ 00:21:29.724 { 00:21:29.724 "name": null, 00:21:29.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.724 "is_configured": false, 00:21:29.724 "data_offset": 0, 00:21:29.724 "data_size": 63488 00:21:29.724 }, 00:21:29.724 { 00:21:29.724 "name": "BaseBdev2", 00:21:29.724 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:29.724 "is_configured": true, 00:21:29.724 "data_offset": 2048, 00:21:29.724 "data_size": 63488 00:21:29.724 }, 00:21:29.724 { 00:21:29.724 "name": "BaseBdev3", 00:21:29.724 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:29.724 "is_configured": true, 00:21:29.724 "data_offset": 2048, 00:21:29.724 "data_size": 63488 00:21:29.724 }, 00:21:29.724 { 00:21:29.724 "name": "BaseBdev4", 00:21:29.724 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:29.724 "is_configured": true, 00:21:29.724 "data_offset": 2048, 00:21:29.724 "data_size": 63488 00:21:29.724 } 00:21:29.724 ] 00:21:29.724 }' 00:21:29.724 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.724 19:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:29.724 
19:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.724 [2024-12-05 19:41:23.062442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.724 [2024-12-05 19:41:23.062662] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:29.724 [2024-12-05 19:41:23.062685] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:29.724 request: 00:21:29.724 { 00:21:29.724 "base_bdev": "BaseBdev1", 00:21:29.724 "raid_bdev": "raid_bdev1", 00:21:29.724 "method": "bdev_raid_add_base_bdev", 00:21:29.724 "req_id": 1 00:21:29.724 } 00:21:29.724 Got JSON-RPC error response 00:21:29.724 response: 00:21:29.724 { 00:21:29.724 "code": -22, 00:21:29.724 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:21:29.724 } 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.724 19:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.661 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.920 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.920 "name": "raid_bdev1", 00:21:30.920 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:30.920 "strip_size_kb": 64, 00:21:30.920 "state": "online", 00:21:30.920 "raid_level": "raid5f", 00:21:30.920 "superblock": true, 00:21:30.920 "num_base_bdevs": 4, 00:21:30.920 "num_base_bdevs_discovered": 3, 00:21:30.920 "num_base_bdevs_operational": 3, 00:21:30.920 "base_bdevs_list": [ 00:21:30.920 { 00:21:30.920 "name": null, 00:21:30.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.920 "is_configured": false, 00:21:30.920 "data_offset": 0, 00:21:30.920 "data_size": 63488 00:21:30.920 }, 00:21:30.920 { 00:21:30.920 "name": "BaseBdev2", 00:21:30.920 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:30.920 "is_configured": true, 00:21:30.920 "data_offset": 2048, 00:21:30.920 "data_size": 63488 00:21:30.920 }, 00:21:30.920 { 00:21:30.920 "name": "BaseBdev3", 00:21:30.920 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:30.920 "is_configured": true, 00:21:30.920 "data_offset": 2048, 00:21:30.920 "data_size": 63488 00:21:30.920 }, 00:21:30.920 { 00:21:30.920 "name": "BaseBdev4", 00:21:30.920 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:30.920 "is_configured": true, 00:21:30.920 "data_offset": 2048, 00:21:30.920 "data_size": 63488 00:21:30.920 } 00:21:30.920 ] 00:21:30.920 }' 00:21:30.921 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.921 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.489 "name": "raid_bdev1", 00:21:31.489 "uuid": "26dd12c7-5cdd-4197-839c-76276e1c58d4", 00:21:31.489 "strip_size_kb": 64, 00:21:31.489 "state": "online", 00:21:31.489 "raid_level": "raid5f", 00:21:31.489 "superblock": true, 00:21:31.489 "num_base_bdevs": 4, 00:21:31.489 "num_base_bdevs_discovered": 3, 00:21:31.489 "num_base_bdevs_operational": 3, 00:21:31.489 "base_bdevs_list": [ 00:21:31.489 { 00:21:31.489 "name": null, 00:21:31.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.489 "is_configured": false, 00:21:31.489 "data_offset": 0, 00:21:31.489 "data_size": 63488 00:21:31.489 }, 00:21:31.489 { 00:21:31.489 "name": "BaseBdev2", 00:21:31.489 "uuid": "dac29a86-40e3-5984-94ce-8854b1a44db9", 00:21:31.489 "is_configured": true, 
00:21:31.489 "data_offset": 2048, 00:21:31.489 "data_size": 63488 00:21:31.489 }, 00:21:31.489 { 00:21:31.489 "name": "BaseBdev3", 00:21:31.489 "uuid": "1297434a-5972-5bf9-943a-a61d205e3f91", 00:21:31.489 "is_configured": true, 00:21:31.489 "data_offset": 2048, 00:21:31.489 "data_size": 63488 00:21:31.489 }, 00:21:31.489 { 00:21:31.489 "name": "BaseBdev4", 00:21:31.489 "uuid": "5efe92fd-9b4b-5a8a-b752-c679aafa43e3", 00:21:31.489 "is_configured": true, 00:21:31.489 "data_offset": 2048, 00:21:31.489 "data_size": 63488 00:21:31.489 } 00:21:31.489 ] 00:21:31.489 }' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85547 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85547 ']' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85547 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85547 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.489 killing process with pid 85547 00:21:31.489 19:41:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85547' 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85547 00:21:31.489 Received shutdown signal, test time was about 60.000000 seconds 00:21:31.489 00:21:31.489 Latency(us) 00:21:31.489 [2024-12-05T19:41:24.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.489 [2024-12-05T19:41:24.930Z] =================================================================================================================== 00:21:31.489 [2024-12-05T19:41:24.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:31.489 [2024-12-05 19:41:24.801372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:31.489 19:41:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85547 00:21:31.489 [2024-12-05 19:41:24.801540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.489 [2024-12-05 19:41:24.801644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.489 [2024-12-05 19:41:24.801675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:32.058 [2024-12-05 19:41:25.229250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:32.994 19:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:32.994 00:21:32.994 real 0m28.799s 00:21:32.994 user 0m37.553s 00:21:32.994 sys 0m2.995s 00:21:32.994 19:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.994 19:41:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.994 ************************************ 00:21:32.994 END TEST raid5f_rebuild_test_sb 00:21:32.994 ************************************ 00:21:32.994 19:41:26 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:21:32.994 19:41:26 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:32.994 19:41:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:32.994 19:41:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.994 19:41:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:32.994 ************************************ 00:21:32.994 START TEST raid_state_function_test_sb_4k 00:21:32.994 ************************************ 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:32.994 19:41:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:32.994 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86369 00:21:32.995 Process raid pid: 86369 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86369' 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86369 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86369 ']' 00:21:32.995 19:41:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.995 19:41:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:33.254 [2024-12-05 19:41:26.440909] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:33.254 [2024-12-05 19:41:26.441100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.254 [2024-12-05 19:41:26.647067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.513 [2024-12-05 19:41:26.810081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.771 [2024-12-05 19:41:27.025280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.771 [2024-12-05 19:41:27.025319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.339 [2024-12-05 19:41:27.488183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:34.339 [2024-12-05 19:41:27.488329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.339 [2024-12-05 19:41:27.488346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.339 [2024-12-05 19:41:27.488362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.339 
19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.339 "name": "Existed_Raid", 00:21:34.339 "uuid": "36c6727b-a130-45eb-adfe-cb222b7d74b7", 00:21:34.339 "strip_size_kb": 0, 00:21:34.339 "state": "configuring", 00:21:34.339 "raid_level": "raid1", 00:21:34.339 "superblock": true, 00:21:34.339 "num_base_bdevs": 2, 00:21:34.339 "num_base_bdevs_discovered": 0, 00:21:34.339 "num_base_bdevs_operational": 2, 00:21:34.339 "base_bdevs_list": [ 00:21:34.339 { 00:21:34.339 "name": "BaseBdev1", 00:21:34.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.339 "is_configured": false, 00:21:34.339 "data_offset": 0, 00:21:34.339 "data_size": 0 00:21:34.339 }, 00:21:34.339 { 00:21:34.339 "name": "BaseBdev2", 00:21:34.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.339 "is_configured": false, 00:21:34.339 "data_offset": 0, 00:21:34.339 "data_size": 0 00:21:34.339 } 00:21:34.339 ] 00:21:34.339 }' 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.339 19:41:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.599 [2024-12-05 19:41:28.016251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:34.599 [2024-12-05 19:41:28.016329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.599 [2024-12-05 19:41:28.024241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:34.599 [2024-12-05 19:41:28.024332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.599 [2024-12-05 19:41:28.024362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.599 [2024-12-05 19:41:28.024397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:21:34.599 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.599 19:41:28 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.857 [2024-12-05 19:41:28.068692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.857 BaseBdev1 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.858 [ 00:21:34.858 { 00:21:34.858 "name": "BaseBdev1", 00:21:34.858 "aliases": [ 00:21:34.858 
"27b8d692-cf2a-4e98-a043-ddff930b597a" 00:21:34.858 ], 00:21:34.858 "product_name": "Malloc disk", 00:21:34.858 "block_size": 4096, 00:21:34.858 "num_blocks": 8192, 00:21:34.858 "uuid": "27b8d692-cf2a-4e98-a043-ddff930b597a", 00:21:34.858 "assigned_rate_limits": { 00:21:34.858 "rw_ios_per_sec": 0, 00:21:34.858 "rw_mbytes_per_sec": 0, 00:21:34.858 "r_mbytes_per_sec": 0, 00:21:34.858 "w_mbytes_per_sec": 0 00:21:34.858 }, 00:21:34.858 "claimed": true, 00:21:34.858 "claim_type": "exclusive_write", 00:21:34.858 "zoned": false, 00:21:34.858 "supported_io_types": { 00:21:34.858 "read": true, 00:21:34.858 "write": true, 00:21:34.858 "unmap": true, 00:21:34.858 "flush": true, 00:21:34.858 "reset": true, 00:21:34.858 "nvme_admin": false, 00:21:34.858 "nvme_io": false, 00:21:34.858 "nvme_io_md": false, 00:21:34.858 "write_zeroes": true, 00:21:34.858 "zcopy": true, 00:21:34.858 "get_zone_info": false, 00:21:34.858 "zone_management": false, 00:21:34.858 "zone_append": false, 00:21:34.858 "compare": false, 00:21:34.858 "compare_and_write": false, 00:21:34.858 "abort": true, 00:21:34.858 "seek_hole": false, 00:21:34.858 "seek_data": false, 00:21:34.858 "copy": true, 00:21:34.858 "nvme_iov_md": false 00:21:34.858 }, 00:21:34.858 "memory_domains": [ 00:21:34.858 { 00:21:34.858 "dma_device_id": "system", 00:21:34.858 "dma_device_type": 1 00:21:34.858 }, 00:21:34.858 { 00:21:34.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.858 "dma_device_type": 2 00:21:34.858 } 00:21:34.858 ], 00:21:34.858 "driver_specific": {} 00:21:34.858 } 00:21:34.858 ] 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.858 "name": "Existed_Raid", 00:21:34.858 "uuid": "db042946-388c-4af8-9918-32f972b9ec7d", 00:21:34.858 "strip_size_kb": 0, 00:21:34.858 "state": "configuring", 00:21:34.858 "raid_level": "raid1", 00:21:34.858 "superblock": true, 00:21:34.858 "num_base_bdevs": 2, 00:21:34.858 
"num_base_bdevs_discovered": 1, 00:21:34.858 "num_base_bdevs_operational": 2, 00:21:34.858 "base_bdevs_list": [ 00:21:34.858 { 00:21:34.858 "name": "BaseBdev1", 00:21:34.858 "uuid": "27b8d692-cf2a-4e98-a043-ddff930b597a", 00:21:34.858 "is_configured": true, 00:21:34.858 "data_offset": 256, 00:21:34.858 "data_size": 7936 00:21:34.858 }, 00:21:34.858 { 00:21:34.858 "name": "BaseBdev2", 00:21:34.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.858 "is_configured": false, 00:21:34.858 "data_offset": 0, 00:21:34.858 "data_size": 0 00:21:34.858 } 00:21:34.858 ] 00:21:34.858 }' 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.858 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.426 [2024-12-05 19:41:28.624971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.426 [2024-12-05 19:41:28.625039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.426 [2024-12-05 19:41:28.633008] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.426 [2024-12-05 19:41:28.635455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:35.426 [2024-12-05 19:41:28.635512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.426 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.426 "name": "Existed_Raid", 00:21:35.426 "uuid": "ca6f573e-5892-4f07-b3ad-4e5c0fda2080", 00:21:35.426 "strip_size_kb": 0, 00:21:35.426 "state": "configuring", 00:21:35.426 "raid_level": "raid1", 00:21:35.426 "superblock": true, 00:21:35.426 "num_base_bdevs": 2, 00:21:35.426 "num_base_bdevs_discovered": 1, 00:21:35.426 "num_base_bdevs_operational": 2, 00:21:35.426 "base_bdevs_list": [ 00:21:35.426 { 00:21:35.426 "name": "BaseBdev1", 00:21:35.426 "uuid": "27b8d692-cf2a-4e98-a043-ddff930b597a", 00:21:35.426 "is_configured": true, 00:21:35.426 "data_offset": 256, 00:21:35.426 "data_size": 7936 00:21:35.426 }, 00:21:35.426 { 00:21:35.427 "name": "BaseBdev2", 00:21:35.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.427 "is_configured": false, 00:21:35.427 "data_offset": 0, 00:21:35.427 "data_size": 0 00:21:35.427 } 00:21:35.427 ] 00:21:35.427 }' 00:21:35.427 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.427 19:41:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.994 19:41:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.994 [2024-12-05 19:41:29.189326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.994 [2024-12-05 19:41:29.189663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:35.994 [2024-12-05 19:41:29.189699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:35.994 [2024-12-05 19:41:29.190035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:35.994 BaseBdev2 00:21:35.994 [2024-12-05 19:41:29.190332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:35.994 [2024-12-05 19:41:29.190367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:35.994 [2024-12-05 19:41:29.190546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:35.994 19:41:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.994 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.994 [ 00:21:35.994 { 00:21:35.994 "name": "BaseBdev2", 00:21:35.994 "aliases": [ 00:21:35.994 "bfdbc1ec-ecec-4db8-85f3-ed8736f9e0d5" 00:21:35.994 ], 00:21:35.994 "product_name": "Malloc disk", 00:21:35.994 "block_size": 4096, 00:21:35.994 "num_blocks": 8192, 00:21:35.994 "uuid": "bfdbc1ec-ecec-4db8-85f3-ed8736f9e0d5", 00:21:35.994 "assigned_rate_limits": { 00:21:35.994 "rw_ios_per_sec": 0, 00:21:35.994 "rw_mbytes_per_sec": 0, 00:21:35.994 "r_mbytes_per_sec": 0, 00:21:35.995 "w_mbytes_per_sec": 0 00:21:35.995 }, 00:21:35.995 "claimed": true, 00:21:35.995 "claim_type": "exclusive_write", 00:21:35.995 "zoned": false, 00:21:35.995 "supported_io_types": { 00:21:35.995 "read": true, 00:21:35.995 "write": true, 00:21:35.995 "unmap": true, 00:21:35.995 "flush": true, 00:21:35.995 "reset": true, 00:21:35.995 "nvme_admin": false, 00:21:35.995 "nvme_io": false, 00:21:35.995 "nvme_io_md": false, 00:21:35.995 "write_zeroes": true, 00:21:35.995 "zcopy": true, 00:21:35.995 "get_zone_info": false, 00:21:35.995 "zone_management": false, 00:21:35.995 "zone_append": false, 00:21:35.995 "compare": false, 00:21:35.995 "compare_and_write": false, 00:21:35.995 "abort": true, 00:21:35.995 "seek_hole": false, 00:21:35.995 "seek_data": false, 00:21:35.995 "copy": true, 00:21:35.995 "nvme_iov_md": false 
00:21:35.995 }, 00:21:35.995 "memory_domains": [ 00:21:35.995 { 00:21:35.995 "dma_device_id": "system", 00:21:35.995 "dma_device_type": 1 00:21:35.995 }, 00:21:35.995 { 00:21:35.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.995 "dma_device_type": 2 00:21:35.995 } 00:21:35.995 ], 00:21:35.995 "driver_specific": {} 00:21:35.995 } 00:21:35.995 ] 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.995 "name": "Existed_Raid", 00:21:35.995 "uuid": "ca6f573e-5892-4f07-b3ad-4e5c0fda2080", 00:21:35.995 "strip_size_kb": 0, 00:21:35.995 "state": "online", 00:21:35.995 "raid_level": "raid1", 00:21:35.995 "superblock": true, 00:21:35.995 "num_base_bdevs": 2, 00:21:35.995 "num_base_bdevs_discovered": 2, 00:21:35.995 "num_base_bdevs_operational": 2, 00:21:35.995 "base_bdevs_list": [ 00:21:35.995 { 00:21:35.995 "name": "BaseBdev1", 00:21:35.995 "uuid": "27b8d692-cf2a-4e98-a043-ddff930b597a", 00:21:35.995 "is_configured": true, 00:21:35.995 "data_offset": 256, 00:21:35.995 "data_size": 7936 00:21:35.995 }, 00:21:35.995 { 00:21:35.995 "name": "BaseBdev2", 00:21:35.995 "uuid": "bfdbc1ec-ecec-4db8-85f3-ed8736f9e0d5", 00:21:35.995 "is_configured": true, 00:21:35.995 "data_offset": 256, 00:21:35.995 "data_size": 7936 00:21:35.995 } 00:21:35.995 ] 00:21:35.995 }' 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.995 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.562 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:36.562 19:41:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:36.562 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:36.562 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:36.563 [2024-12-05 19:41:29.749913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.563 "name": "Existed_Raid", 00:21:36.563 "aliases": [ 00:21:36.563 "ca6f573e-5892-4f07-b3ad-4e5c0fda2080" 00:21:36.563 ], 00:21:36.563 "product_name": "Raid Volume", 00:21:36.563 "block_size": 4096, 00:21:36.563 "num_blocks": 7936, 00:21:36.563 "uuid": "ca6f573e-5892-4f07-b3ad-4e5c0fda2080", 00:21:36.563 "assigned_rate_limits": { 00:21:36.563 "rw_ios_per_sec": 0, 00:21:36.563 "rw_mbytes_per_sec": 0, 00:21:36.563 "r_mbytes_per_sec": 0, 00:21:36.563 "w_mbytes_per_sec": 0 00:21:36.563 }, 00:21:36.563 "claimed": false, 00:21:36.563 "zoned": false, 00:21:36.563 "supported_io_types": { 00:21:36.563 "read": true, 
00:21:36.563 "write": true, 00:21:36.563 "unmap": false, 00:21:36.563 "flush": false, 00:21:36.563 "reset": true, 00:21:36.563 "nvme_admin": false, 00:21:36.563 "nvme_io": false, 00:21:36.563 "nvme_io_md": false, 00:21:36.563 "write_zeroes": true, 00:21:36.563 "zcopy": false, 00:21:36.563 "get_zone_info": false, 00:21:36.563 "zone_management": false, 00:21:36.563 "zone_append": false, 00:21:36.563 "compare": false, 00:21:36.563 "compare_and_write": false, 00:21:36.563 "abort": false, 00:21:36.563 "seek_hole": false, 00:21:36.563 "seek_data": false, 00:21:36.563 "copy": false, 00:21:36.563 "nvme_iov_md": false 00:21:36.563 }, 00:21:36.563 "memory_domains": [ 00:21:36.563 { 00:21:36.563 "dma_device_id": "system", 00:21:36.563 "dma_device_type": 1 00:21:36.563 }, 00:21:36.563 { 00:21:36.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.563 "dma_device_type": 2 00:21:36.563 }, 00:21:36.563 { 00:21:36.563 "dma_device_id": "system", 00:21:36.563 "dma_device_type": 1 00:21:36.563 }, 00:21:36.563 { 00:21:36.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.563 "dma_device_type": 2 00:21:36.563 } 00:21:36.563 ], 00:21:36.563 "driver_specific": { 00:21:36.563 "raid": { 00:21:36.563 "uuid": "ca6f573e-5892-4f07-b3ad-4e5c0fda2080", 00:21:36.563 "strip_size_kb": 0, 00:21:36.563 "state": "online", 00:21:36.563 "raid_level": "raid1", 00:21:36.563 "superblock": true, 00:21:36.563 "num_base_bdevs": 2, 00:21:36.563 "num_base_bdevs_discovered": 2, 00:21:36.563 "num_base_bdevs_operational": 2, 00:21:36.563 "base_bdevs_list": [ 00:21:36.563 { 00:21:36.563 "name": "BaseBdev1", 00:21:36.563 "uuid": "27b8d692-cf2a-4e98-a043-ddff930b597a", 00:21:36.563 "is_configured": true, 00:21:36.563 "data_offset": 256, 00:21:36.563 "data_size": 7936 00:21:36.563 }, 00:21:36.563 { 00:21:36.563 "name": "BaseBdev2", 00:21:36.563 "uuid": "bfdbc1ec-ecec-4db8-85f3-ed8736f9e0d5", 00:21:36.563 "is_configured": true, 00:21:36.563 "data_offset": 256, 00:21:36.563 "data_size": 7936 00:21:36.563 } 
00:21:36.563 ] 00:21:36.563 } 00:21:36.563 } 00:21:36.563 }' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:36.563 BaseBdev2' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.563 19:41:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.821 [2024-12-05 19:41:30.013618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:36.821 19:41:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.821 "name": "Existed_Raid", 00:21:36.821 "uuid": "ca6f573e-5892-4f07-b3ad-4e5c0fda2080", 00:21:36.821 "strip_size_kb": 0, 00:21:36.821 "state": "online", 00:21:36.821 "raid_level": "raid1", 00:21:36.821 "superblock": true, 00:21:36.821 
"num_base_bdevs": 2, 00:21:36.821 "num_base_bdevs_discovered": 1, 00:21:36.821 "num_base_bdevs_operational": 1, 00:21:36.821 "base_bdevs_list": [ 00:21:36.821 { 00:21:36.821 "name": null, 00:21:36.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.821 "is_configured": false, 00:21:36.821 "data_offset": 0, 00:21:36.821 "data_size": 7936 00:21:36.821 }, 00:21:36.821 { 00:21:36.821 "name": "BaseBdev2", 00:21:36.821 "uuid": "bfdbc1ec-ecec-4db8-85f3-ed8736f9e0d5", 00:21:36.821 "is_configured": true, 00:21:36.821 "data_offset": 256, 00:21:36.821 "data_size": 7936 00:21:36.821 } 00:21:36.821 ] 00:21:36.821 }' 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.821 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.402 [2024-12-05 19:41:30.682278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:37.402 [2024-12-05 19:41:30.682434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.402 [2024-12-05 19:41:30.772207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.402 [2024-12-05 19:41:30.772286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.402 [2024-12-05 19:41:30.772309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:37.402 19:41:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86369 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86369 ']' 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86369 00:21:37.402 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86369 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.678 killing process with pid 86369 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86369' 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86369 00:21:37.678 [2024-12-05 19:41:30.863909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.678 19:41:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86369 00:21:37.678 [2024-12-05 19:41:30.878854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.613 19:41:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:21:38.613 00:21:38.613 real 0m5.614s 00:21:38.613 user 0m8.464s 00:21:38.613 sys 0m0.857s 00:21:38.613 19:41:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.613 19:41:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.613 ************************************ 00:21:38.613 END TEST raid_state_function_test_sb_4k 00:21:38.613 ************************************ 00:21:38.613 19:41:31 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:38.613 19:41:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:38.613 19:41:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.613 19:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.613 ************************************ 00:21:38.613 START TEST raid_superblock_test_4k 00:21:38.613 ************************************ 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:38.613 
19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86621 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86621 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86621 ']' 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.613 19:41:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.872 [2024-12-05 19:41:32.105852] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:21:38.872 [2024-12-05 19:41:32.106027] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86621 ] 00:21:38.872 [2024-12-05 19:41:32.285043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.147 [2024-12-05 19:41:32.417470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.405 [2024-12-05 19:41:32.623443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.405 [2024-12-05 19:41:32.623508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.664 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.922 malloc1 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.922 [2024-12-05 19:41:33.123338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:39.922 [2024-12-05 19:41:33.123413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.922 [2024-12-05 19:41:33.123446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:39.922 [2024-12-05 19:41:33.123463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.922 [2024-12-05 19:41:33.126297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.922 [2024-12-05 19:41:33.126344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:39.922 pt1 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.922 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.923 malloc2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.923 [2024-12-05 19:41:33.180876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:39.923 [2024-12-05 19:41:33.180944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.923 [2024-12-05 19:41:33.180981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:39.923 [2024-12-05 19:41:33.180997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.923 [2024-12-05 19:41:33.183875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.923 [2024-12-05 
19:41:33.183927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:39.923 pt2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.923 [2024-12-05 19:41:33.192938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:39.923 [2024-12-05 19:41:33.195346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:39.923 [2024-12-05 19:41:33.195587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:39.923 [2024-12-05 19:41:33.195610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:39.923 [2024-12-05 19:41:33.195962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:39.923 [2024-12-05 19:41:33.196192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:39.923 [2024-12-05 19:41:33.196236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:39.923 [2024-12-05 19:41:33.196430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.923 "name": "raid_bdev1", 00:21:39.923 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:39.923 "strip_size_kb": 0, 00:21:39.923 "state": "online", 00:21:39.923 "raid_level": "raid1", 00:21:39.923 "superblock": true, 00:21:39.923 "num_base_bdevs": 2, 00:21:39.923 
"num_base_bdevs_discovered": 2, 00:21:39.923 "num_base_bdevs_operational": 2, 00:21:39.923 "base_bdevs_list": [ 00:21:39.923 { 00:21:39.923 "name": "pt1", 00:21:39.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.923 "is_configured": true, 00:21:39.923 "data_offset": 256, 00:21:39.923 "data_size": 7936 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "name": "pt2", 00:21:39.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.923 "is_configured": true, 00:21:39.923 "data_offset": 256, 00:21:39.923 "data_size": 7936 00:21:39.923 } 00:21:39.923 ] 00:21:39.923 }' 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.923 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.490 [2024-12-05 19:41:33.705414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.490 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.490 "name": "raid_bdev1", 00:21:40.490 "aliases": [ 00:21:40.490 "fcd4aef3-f995-4af9-8f97-66923c3ce215" 00:21:40.490 ], 00:21:40.490 "product_name": "Raid Volume", 00:21:40.490 "block_size": 4096, 00:21:40.490 "num_blocks": 7936, 00:21:40.490 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:40.490 "assigned_rate_limits": { 00:21:40.490 "rw_ios_per_sec": 0, 00:21:40.490 "rw_mbytes_per_sec": 0, 00:21:40.490 "r_mbytes_per_sec": 0, 00:21:40.490 "w_mbytes_per_sec": 0 00:21:40.490 }, 00:21:40.490 "claimed": false, 00:21:40.490 "zoned": false, 00:21:40.490 "supported_io_types": { 00:21:40.490 "read": true, 00:21:40.490 "write": true, 00:21:40.490 "unmap": false, 00:21:40.490 "flush": false, 00:21:40.490 "reset": true, 00:21:40.490 "nvme_admin": false, 00:21:40.490 "nvme_io": false, 00:21:40.490 "nvme_io_md": false, 00:21:40.490 "write_zeroes": true, 00:21:40.490 "zcopy": false, 00:21:40.490 "get_zone_info": false, 00:21:40.490 "zone_management": false, 00:21:40.490 "zone_append": false, 00:21:40.490 "compare": false, 00:21:40.490 "compare_and_write": false, 00:21:40.490 "abort": false, 00:21:40.490 "seek_hole": false, 00:21:40.491 "seek_data": false, 00:21:40.491 "copy": false, 00:21:40.491 "nvme_iov_md": false 00:21:40.491 }, 00:21:40.491 "memory_domains": [ 00:21:40.491 { 00:21:40.491 "dma_device_id": "system", 00:21:40.491 "dma_device_type": 1 00:21:40.491 }, 00:21:40.491 { 00:21:40.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.491 "dma_device_type": 2 00:21:40.491 }, 00:21:40.491 { 00:21:40.491 "dma_device_id": "system", 00:21:40.491 "dma_device_type": 1 00:21:40.491 }, 00:21:40.491 { 00:21:40.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.491 "dma_device_type": 2 00:21:40.491 } 00:21:40.491 ], 
00:21:40.491 "driver_specific": { 00:21:40.491 "raid": { 00:21:40.491 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:40.491 "strip_size_kb": 0, 00:21:40.491 "state": "online", 00:21:40.491 "raid_level": "raid1", 00:21:40.491 "superblock": true, 00:21:40.491 "num_base_bdevs": 2, 00:21:40.491 "num_base_bdevs_discovered": 2, 00:21:40.491 "num_base_bdevs_operational": 2, 00:21:40.491 "base_bdevs_list": [ 00:21:40.491 { 00:21:40.491 "name": "pt1", 00:21:40.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.491 "is_configured": true, 00:21:40.491 "data_offset": 256, 00:21:40.491 "data_size": 7936 00:21:40.491 }, 00:21:40.491 { 00:21:40.491 "name": "pt2", 00:21:40.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.491 "is_configured": true, 00:21:40.491 "data_offset": 256, 00:21:40.491 "data_size": 7936 00:21:40.491 } 00:21:40.491 ] 00:21:40.491 } 00:21:40.491 } 00:21:40.491 }' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:40.491 pt2' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.491 19:41:33 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.491 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 [2024-12-05 19:41:33.953416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fcd4aef3-f995-4af9-8f97-66923c3ce215 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z fcd4aef3-f995-4af9-8f97-66923c3ce215 ']' 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 [2024-12-05 19:41:33.997107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.751 [2024-12-05 19:41:33.997138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.751 [2024-12-05 19:41:33.997255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.751 [2024-12-05 19:41:33.997334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.751 [2024-12-05 19:41:33.997355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 [2024-12-05 19:41:34.125181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:40.751 [2024-12-05 19:41:34.127755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:40.751 [2024-12-05 19:41:34.127868] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:40.751 [2024-12-05 19:41:34.127955] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:40.751 [2024-12-05 19:41:34.127983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.751 [2024-12-05 19:41:34.128000] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:40.751 request: 00:21:40.751 { 00:21:40.751 "name": "raid_bdev1", 00:21:40.751 "raid_level": "raid1", 00:21:40.751 "base_bdevs": [ 00:21:40.751 "malloc1", 00:21:40.751 "malloc2" 00:21:40.751 ], 00:21:40.751 "superblock": false, 00:21:40.751 "method": "bdev_raid_create", 00:21:40.751 "req_id": 1 00:21:40.751 } 00:21:40.751 Got JSON-RPC error response 00:21:40.751 response: 00:21:40.751 { 00:21:40.751 "code": -17, 00:21:40.751 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:40.751 } 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.751 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.010 [2024-12-05 19:41:34.205192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:41.010 [2024-12-05 19:41:34.205270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.010 [2024-12-05 19:41:34.205302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:41.010 [2024-12-05 19:41:34.205320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.010 [2024-12-05 19:41:34.208323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.010 [2024-12-05 19:41:34.208373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:41.010 [2024-12-05 19:41:34.208481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:41.010 [2024-12-05 19:41:34.208561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:41.010 pt1 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.010 "name": "raid_bdev1", 00:21:41.010 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:41.010 "strip_size_kb": 0, 00:21:41.010 "state": "configuring", 00:21:41.010 "raid_level": "raid1", 00:21:41.010 "superblock": true, 00:21:41.010 "num_base_bdevs": 2, 00:21:41.010 "num_base_bdevs_discovered": 1, 00:21:41.010 "num_base_bdevs_operational": 2, 00:21:41.010 "base_bdevs_list": [ 00:21:41.010 { 00:21:41.010 "name": "pt1", 00:21:41.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.010 "is_configured": true, 00:21:41.010 "data_offset": 256, 00:21:41.010 "data_size": 7936 00:21:41.010 }, 00:21:41.010 { 00:21:41.010 "name": null, 00:21:41.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.010 "is_configured": false, 00:21:41.010 "data_offset": 256, 00:21:41.010 "data_size": 7936 00:21:41.010 } 
00:21:41.010 ] 00:21:41.010 }' 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.010 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.270 [2024-12-05 19:41:34.705381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:41.270 [2024-12-05 19:41:34.705467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.270 [2024-12-05 19:41:34.705500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:41.270 [2024-12-05 19:41:34.705518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.270 [2024-12-05 19:41:34.706102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.270 [2024-12-05 19:41:34.706145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:41.270 [2024-12-05 19:41:34.706250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:41.270 [2024-12-05 19:41:34.706293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:41.270 [2024-12-05 19:41:34.706447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:41.270 [2024-12-05 19:41:34.706487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:41.270 [2024-12-05 19:41:34.706816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:41.270 [2024-12-05 19:41:34.707017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:41.270 [2024-12-05 19:41:34.707033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:41.270 [2024-12-05 19:41:34.707210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.270 pt2 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.270 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.528 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.528 "name": "raid_bdev1", 00:21:41.528 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:41.529 "strip_size_kb": 0, 00:21:41.529 "state": "online", 00:21:41.529 "raid_level": "raid1", 00:21:41.529 "superblock": true, 00:21:41.529 "num_base_bdevs": 2, 00:21:41.529 "num_base_bdevs_discovered": 2, 00:21:41.529 "num_base_bdevs_operational": 2, 00:21:41.529 "base_bdevs_list": [ 00:21:41.529 { 00:21:41.529 "name": "pt1", 00:21:41.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.529 "is_configured": true, 00:21:41.529 "data_offset": 256, 00:21:41.529 "data_size": 7936 00:21:41.529 }, 00:21:41.529 { 00:21:41.529 "name": "pt2", 00:21:41.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.529 "is_configured": true, 00:21:41.529 "data_offset": 256, 00:21:41.529 "data_size": 7936 00:21:41.529 } 00:21:41.529 ] 00:21:41.529 }' 00:21:41.529 19:41:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.529 19:41:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.119 [2024-12-05 19:41:35.241877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.119 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:42.119 "name": "raid_bdev1", 00:21:42.120 "aliases": [ 00:21:42.120 "fcd4aef3-f995-4af9-8f97-66923c3ce215" 00:21:42.120 ], 00:21:42.120 "product_name": "Raid Volume", 00:21:42.120 "block_size": 4096, 00:21:42.120 "num_blocks": 7936, 00:21:42.120 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:42.120 "assigned_rate_limits": { 00:21:42.120 "rw_ios_per_sec": 0, 00:21:42.120 "rw_mbytes_per_sec": 0, 00:21:42.120 "r_mbytes_per_sec": 0, 00:21:42.120 "w_mbytes_per_sec": 0 00:21:42.120 }, 00:21:42.120 "claimed": false, 00:21:42.120 "zoned": false, 00:21:42.120 "supported_io_types": { 00:21:42.120 "read": true, 00:21:42.120 "write": true, 00:21:42.120 "unmap": false, 
00:21:42.120 "flush": false, 00:21:42.120 "reset": true, 00:21:42.120 "nvme_admin": false, 00:21:42.120 "nvme_io": false, 00:21:42.120 "nvme_io_md": false, 00:21:42.120 "write_zeroes": true, 00:21:42.120 "zcopy": false, 00:21:42.120 "get_zone_info": false, 00:21:42.120 "zone_management": false, 00:21:42.120 "zone_append": false, 00:21:42.120 "compare": false, 00:21:42.120 "compare_and_write": false, 00:21:42.120 "abort": false, 00:21:42.120 "seek_hole": false, 00:21:42.120 "seek_data": false, 00:21:42.120 "copy": false, 00:21:42.120 "nvme_iov_md": false 00:21:42.120 }, 00:21:42.120 "memory_domains": [ 00:21:42.120 { 00:21:42.120 "dma_device_id": "system", 00:21:42.120 "dma_device_type": 1 00:21:42.120 }, 00:21:42.120 { 00:21:42.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.120 "dma_device_type": 2 00:21:42.120 }, 00:21:42.120 { 00:21:42.120 "dma_device_id": "system", 00:21:42.120 "dma_device_type": 1 00:21:42.120 }, 00:21:42.120 { 00:21:42.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.120 "dma_device_type": 2 00:21:42.120 } 00:21:42.120 ], 00:21:42.120 "driver_specific": { 00:21:42.120 "raid": { 00:21:42.120 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:42.120 "strip_size_kb": 0, 00:21:42.120 "state": "online", 00:21:42.120 "raid_level": "raid1", 00:21:42.120 "superblock": true, 00:21:42.120 "num_base_bdevs": 2, 00:21:42.120 "num_base_bdevs_discovered": 2, 00:21:42.120 "num_base_bdevs_operational": 2, 00:21:42.120 "base_bdevs_list": [ 00:21:42.120 { 00:21:42.120 "name": "pt1", 00:21:42.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:42.120 "is_configured": true, 00:21:42.120 "data_offset": 256, 00:21:42.120 "data_size": 7936 00:21:42.120 }, 00:21:42.120 { 00:21:42.120 "name": "pt2", 00:21:42.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.120 "is_configured": true, 00:21:42.120 "data_offset": 256, 00:21:42.120 "data_size": 7936 00:21:42.120 } 00:21:42.120 ] 00:21:42.120 } 00:21:42.120 } 00:21:42.120 }' 00:21:42.120 
19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:42.120 pt2' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.120 
19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:42.120 [2024-12-05 19:41:35.513909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.120 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' fcd4aef3-f995-4af9-8f97-66923c3ce215 '!=' fcd4aef3-f995-4af9-8f97-66923c3ce215 ']' 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.379 [2024-12-05 19:41:35.565668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:42.379 
19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.379 "name": "raid_bdev1", 00:21:42.379 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 
00:21:42.379 "strip_size_kb": 0, 00:21:42.379 "state": "online", 00:21:42.379 "raid_level": "raid1", 00:21:42.379 "superblock": true, 00:21:42.379 "num_base_bdevs": 2, 00:21:42.379 "num_base_bdevs_discovered": 1, 00:21:42.379 "num_base_bdevs_operational": 1, 00:21:42.379 "base_bdevs_list": [ 00:21:42.379 { 00:21:42.379 "name": null, 00:21:42.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.379 "is_configured": false, 00:21:42.379 "data_offset": 0, 00:21:42.379 "data_size": 7936 00:21:42.379 }, 00:21:42.379 { 00:21:42.379 "name": "pt2", 00:21:42.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.379 "is_configured": true, 00:21:42.379 "data_offset": 256, 00:21:42.379 "data_size": 7936 00:21:42.379 } 00:21:42.379 ] 00:21:42.379 }' 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.379 19:41:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.946 [2024-12-05 19:41:36.105806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.946 [2024-12-05 19:41:36.105852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.946 [2024-12-05 19:41:36.105954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.946 [2024-12-05 19:41:36.106023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.946 [2024-12-05 19:41:36.106044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:42.946 19:41:36 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.946 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:21:42.947 19:41:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.947 [2024-12-05 19:41:36.177815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.947 [2024-12-05 19:41:36.177894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.947 [2024-12-05 19:41:36.177921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:42.947 [2024-12-05 19:41:36.177938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.947 [2024-12-05 19:41:36.180908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.947 [2024-12-05 19:41:36.180960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.947 [2024-12-05 19:41:36.181100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:42.947 [2024-12-05 19:41:36.181179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.947 [2024-12-05 19:41:36.181325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:42.947 [2024-12-05 19:41:36.181358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:42.947 [2024-12-05 19:41:36.181658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:42.947 [2024-12-05 19:41:36.181890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:42.947 [2024-12-05 19:41:36.181917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:21:42.947 [2024-12-05 19:41:36.182157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.947 pt2 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.947 "name": "raid_bdev1", 00:21:42.947 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:42.947 "strip_size_kb": 0, 00:21:42.947 "state": "online", 00:21:42.947 "raid_level": "raid1", 00:21:42.947 "superblock": true, 00:21:42.947 "num_base_bdevs": 2, 00:21:42.947 "num_base_bdevs_discovered": 1, 00:21:42.947 "num_base_bdevs_operational": 1, 00:21:42.947 "base_bdevs_list": [ 00:21:42.947 { 00:21:42.947 "name": null, 00:21:42.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.947 "is_configured": false, 00:21:42.947 "data_offset": 256, 00:21:42.947 "data_size": 7936 00:21:42.947 }, 00:21:42.947 { 00:21:42.947 "name": "pt2", 00:21:42.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.947 "is_configured": true, 00:21:42.947 "data_offset": 256, 00:21:42.947 "data_size": 7936 00:21:42.947 } 00:21:42.947 ] 00:21:42.947 }' 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.947 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.515 [2024-12-05 19:41:36.722225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.515 [2024-12-05 19:41:36.722267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.515 [2024-12-05 19:41:36.722368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.515 [2024-12-05 19:41:36.722449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.515 [2024-12-05 19:41:36.722466] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.515 [2024-12-05 19:41:36.802278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.515 [2024-12-05 19:41:36.802360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.515 [2024-12-05 19:41:36.802393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:43.515 [2024-12-05 19:41:36.802408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.515 [2024-12-05 19:41:36.805322] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.515 [2024-12-05 19:41:36.805369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.515 [2024-12-05 19:41:36.805486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.515 [2024-12-05 19:41:36.805548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.515 [2024-12-05 19:41:36.805758] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:43.515 [2024-12-05 19:41:36.805787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.515 [2024-12-05 19:41:36.805814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:43.515 [2024-12-05 19:41:36.805888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.515 [2024-12-05 19:41:36.806002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:43.515 [2024-12-05 19:41:36.806018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:43.515 [2024-12-05 19:41:36.806338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:43.515 [2024-12-05 19:41:36.806543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:43.515 [2024-12-05 19:41:36.806575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:43.515 [2024-12-05 19:41:36.806834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.515 pt1 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.515 "name": "raid_bdev1", 00:21:43.515 "uuid": "fcd4aef3-f995-4af9-8f97-66923c3ce215", 00:21:43.515 "strip_size_kb": 0, 00:21:43.515 "state": "online", 00:21:43.515 "raid_level": "raid1", 
00:21:43.515 "superblock": true, 00:21:43.515 "num_base_bdevs": 2, 00:21:43.515 "num_base_bdevs_discovered": 1, 00:21:43.515 "num_base_bdevs_operational": 1, 00:21:43.515 "base_bdevs_list": [ 00:21:43.515 { 00:21:43.515 "name": null, 00:21:43.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.515 "is_configured": false, 00:21:43.515 "data_offset": 256, 00:21:43.515 "data_size": 7936 00:21:43.515 }, 00:21:43.515 { 00:21:43.515 "name": "pt2", 00:21:43.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.515 "is_configured": true, 00:21:43.515 "data_offset": 256, 00:21:43.515 "data_size": 7936 00:21:43.515 } 00:21:43.515 ] 00:21:43.515 }' 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.515 19:41:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.082 
[2024-12-05 19:41:37.383209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' fcd4aef3-f995-4af9-8f97-66923c3ce215 '!=' fcd4aef3-f995-4af9-8f97-66923c3ce215 ']' 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86621 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86621 ']' 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86621 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86621 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.082 killing process with pid 86621 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86621' 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86621 00:21:44.082 [2024-12-05 19:41:37.462753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.082 19:41:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86621 00:21:44.082 [2024-12-05 19:41:37.462898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.082 [2024-12-05 19:41:37.462976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:21:44.083 [2024-12-05 19:41:37.463000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:44.342 [2024-12-05 19:41:37.650192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.279 19:41:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:45.279 00:21:45.279 real 0m6.700s 00:21:45.279 user 0m10.653s 00:21:45.279 sys 0m0.946s 00:21:45.279 19:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.279 19:41:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.279 ************************************ 00:21:45.279 END TEST raid_superblock_test_4k 00:21:45.279 ************************************ 00:21:45.538 19:41:38 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:45.538 19:41:38 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:45.538 19:41:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:45.538 19:41:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.538 19:41:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.538 ************************************ 00:21:45.538 START TEST raid_rebuild_test_sb_4k 00:21:45.538 ************************************ 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:45.538 19:41:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86950 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86950 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86950 ']' 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.538 19:41:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.538 [2024-12-05 19:41:38.862616] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:21:45.538 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:45.538 Zero copy mechanism will not be used. 
00:21:45.538 [2024-12-05 19:41:38.862877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86950 ] 00:21:45.797 [2024-12-05 19:41:39.046517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.797 [2024-12-05 19:41:39.178137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.056 [2024-12-05 19:41:39.381596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.056 [2024-12-05 19:41:39.381683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.623 BaseBdev1_malloc 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.623 [2024-12-05 19:41:39.911593] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:46.623 [2024-12-05 19:41:39.911673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.623 [2024-12-05 19:41:39.911724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:46.623 [2024-12-05 19:41:39.911748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.623 [2024-12-05 19:41:39.914578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.623 [2024-12-05 19:41:39.914632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:46.623 BaseBdev1 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.623 BaseBdev2_malloc 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.623 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.623 [2024-12-05 19:41:39.963897] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:46.623 [2024-12-05 19:41:39.963997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:46.623 [2024-12-05 19:41:39.964033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:46.623 [2024-12-05 19:41:39.964052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.624 [2024-12-05 19:41:39.966830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.624 [2024-12-05 19:41:39.966882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:46.624 BaseBdev2 00:21:46.624 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.624 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:46.624 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.624 19:41:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.624 spare_malloc 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.624 spare_delay 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.624 
[2024-12-05 19:41:40.031547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:46.624 [2024-12-05 19:41:40.031628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.624 [2024-12-05 19:41:40.031671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:46.624 [2024-12-05 19:41:40.031690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.624 [2024-12-05 19:41:40.034548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.624 [2024-12-05 19:41:40.034601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:46.624 spare 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.624 [2024-12-05 19:41:40.039634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.624 [2024-12-05 19:41:40.042132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.624 [2024-12-05 19:41:40.042393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:46.624 [2024-12-05 19:41:40.042417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:46.624 [2024-12-05 19:41:40.042772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:46.624 [2024-12-05 19:41:40.043012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:46.624 [2024-12-05 
19:41:40.043038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:46.624 [2024-12-05 19:41:40.043248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.624 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.883 19:41:40 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.883 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.883 "name": "raid_bdev1", 00:21:46.883 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:46.883 "strip_size_kb": 0, 00:21:46.883 "state": "online", 00:21:46.883 "raid_level": "raid1", 00:21:46.883 "superblock": true, 00:21:46.883 "num_base_bdevs": 2, 00:21:46.883 "num_base_bdevs_discovered": 2, 00:21:46.883 "num_base_bdevs_operational": 2, 00:21:46.883 "base_bdevs_list": [ 00:21:46.883 { 00:21:46.883 "name": "BaseBdev1", 00:21:46.883 "uuid": "27fd7625-1656-5a20-9d9a-6613d9d416d5", 00:21:46.883 "is_configured": true, 00:21:46.883 "data_offset": 256, 00:21:46.884 "data_size": 7936 00:21:46.884 }, 00:21:46.884 { 00:21:46.884 "name": "BaseBdev2", 00:21:46.884 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:46.884 "is_configured": true, 00:21:46.884 "data_offset": 256, 00:21:46.884 "data_size": 7936 00:21:46.884 } 00:21:46.884 ] 00:21:46.884 }' 00:21:46.884 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.884 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.143 [2024-12-05 19:41:40.524144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.143 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:47.402 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.402 
19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:47.661 [2024-12-05 19:41:40.851947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:47.661 /dev/nbd0 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.661 1+0 records in 00:21:47.661 1+0 records out 00:21:47.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372157 s, 11.0 MB/s 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:47.661 19:41:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:47.661 19:41:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:48.597 7936+0 records in 00:21:48.597 7936+0 records out 00:21:48.597 32505856 bytes (33 MB, 31 MiB) copied, 0.897265 s, 36.2 MB/s 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:48.598 19:41:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:48.856 [2024-12-05 19:41:42.079312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.856 [2024-12-05 19:41:42.099445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.856 19:41:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.856 "name": "raid_bdev1", 00:21:48.856 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:48.856 "strip_size_kb": 0, 00:21:48.856 "state": "online", 00:21:48.856 "raid_level": "raid1", 00:21:48.856 "superblock": true, 00:21:48.856 "num_base_bdevs": 2, 00:21:48.856 "num_base_bdevs_discovered": 1, 00:21:48.856 "num_base_bdevs_operational": 1, 00:21:48.856 "base_bdevs_list": [ 00:21:48.856 { 00:21:48.856 "name": null, 00:21:48.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.856 "is_configured": false, 00:21:48.856 "data_offset": 0, 00:21:48.856 "data_size": 7936 00:21:48.856 }, 00:21:48.856 { 00:21:48.856 "name": "BaseBdev2", 00:21:48.856 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:48.856 "is_configured": true, 00:21:48.856 "data_offset": 256, 00:21:48.856 
"data_size": 7936 00:21:48.856 } 00:21:48.856 ] 00:21:48.856 }' 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.856 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.428 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.428 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.428 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.428 [2024-12-05 19:41:42.631633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.428 [2024-12-05 19:41:42.648196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:49.428 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.428 19:41:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:49.428 [2024-12-05 19:41:42.650693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.363 "name": "raid_bdev1", 00:21:50.363 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:50.363 "strip_size_kb": 0, 00:21:50.363 "state": "online", 00:21:50.363 "raid_level": "raid1", 00:21:50.363 "superblock": true, 00:21:50.363 "num_base_bdevs": 2, 00:21:50.363 "num_base_bdevs_discovered": 2, 00:21:50.363 "num_base_bdevs_operational": 2, 00:21:50.363 "process": { 00:21:50.363 "type": "rebuild", 00:21:50.363 "target": "spare", 00:21:50.363 "progress": { 00:21:50.363 "blocks": 2560, 00:21:50.363 "percent": 32 00:21:50.363 } 00:21:50.363 }, 00:21:50.363 "base_bdevs_list": [ 00:21:50.363 { 00:21:50.363 "name": "spare", 00:21:50.363 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:50.363 "is_configured": true, 00:21:50.363 "data_offset": 256, 00:21:50.363 "data_size": 7936 00:21:50.363 }, 00:21:50.363 { 00:21:50.363 "name": "BaseBdev2", 00:21:50.363 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:50.363 "is_configured": true, 00:21:50.363 "data_offset": 256, 00:21:50.363 "data_size": 7936 00:21:50.363 } 00:21:50.363 ] 00:21:50.363 }' 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.363 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.622 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.623 [2024-12-05 19:41:43.815749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.623 [2024-12-05 19:41:43.859788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:50.623 [2024-12-05 19:41:43.859914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.623 [2024-12-05 19:41:43.859958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.623 [2024-12-05 19:41:43.859976] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.623 "name": "raid_bdev1", 00:21:50.623 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:50.623 "strip_size_kb": 0, 00:21:50.623 "state": "online", 00:21:50.623 "raid_level": "raid1", 00:21:50.623 "superblock": true, 00:21:50.623 "num_base_bdevs": 2, 00:21:50.623 "num_base_bdevs_discovered": 1, 00:21:50.623 "num_base_bdevs_operational": 1, 00:21:50.623 "base_bdevs_list": [ 00:21:50.623 { 00:21:50.623 "name": null, 00:21:50.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.623 "is_configured": false, 00:21:50.623 "data_offset": 0, 00:21:50.623 "data_size": 7936 00:21:50.623 }, 00:21:50.623 { 00:21:50.623 "name": "BaseBdev2", 00:21:50.623 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:50.623 "is_configured": true, 00:21:50.623 "data_offset": 256, 00:21:50.623 "data_size": 7936 00:21:50.623 } 00:21:50.623 ] 00:21:50.623 }' 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.623 19:41:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.189 19:41:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.189 "name": "raid_bdev1", 00:21:51.189 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:51.189 "strip_size_kb": 0, 00:21:51.189 "state": "online", 00:21:51.189 "raid_level": "raid1", 00:21:51.189 "superblock": true, 00:21:51.189 "num_base_bdevs": 2, 00:21:51.189 "num_base_bdevs_discovered": 1, 00:21:51.189 "num_base_bdevs_operational": 1, 00:21:51.189 "base_bdevs_list": [ 00:21:51.189 { 00:21:51.189 "name": null, 00:21:51.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.189 "is_configured": false, 00:21:51.189 "data_offset": 0, 00:21:51.189 "data_size": 7936 00:21:51.189 }, 00:21:51.189 { 00:21:51.189 "name": "BaseBdev2", 00:21:51.189 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:51.189 "is_configured": true, 00:21:51.189 "data_offset": 
256, 00:21:51.189 "data_size": 7936 00:21:51.189 } 00:21:51.189 ] 00:21:51.189 }' 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.189 [2024-12-05 19:41:44.572356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.189 [2024-12-05 19:41:44.588423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.189 19:41:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:51.189 [2024-12-05 19:41:44.591105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.567 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.567 "name": "raid_bdev1", 00:21:52.567 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:52.567 "strip_size_kb": 0, 00:21:52.567 "state": "online", 00:21:52.567 "raid_level": "raid1", 00:21:52.567 "superblock": true, 00:21:52.567 "num_base_bdevs": 2, 00:21:52.567 "num_base_bdevs_discovered": 2, 00:21:52.567 "num_base_bdevs_operational": 2, 00:21:52.567 "process": { 00:21:52.567 "type": "rebuild", 00:21:52.567 "target": "spare", 00:21:52.567 "progress": { 00:21:52.567 "blocks": 2560, 00:21:52.567 "percent": 32 00:21:52.567 } 00:21:52.567 }, 00:21:52.567 "base_bdevs_list": [ 00:21:52.567 { 00:21:52.567 "name": "spare", 00:21:52.568 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:52.568 "is_configured": true, 00:21:52.568 "data_offset": 256, 00:21:52.568 "data_size": 7936 00:21:52.568 }, 00:21:52.568 { 00:21:52.568 "name": "BaseBdev2", 00:21:52.568 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:52.568 "is_configured": true, 00:21:52.568 "data_offset": 256, 00:21:52.568 "data_size": 7936 00:21:52.568 } 00:21:52.568 ] 00:21:52.568 }' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:52.568 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=739 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.568 "name": "raid_bdev1", 00:21:52.568 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:52.568 "strip_size_kb": 0, 00:21:52.568 "state": "online", 00:21:52.568 "raid_level": "raid1", 00:21:52.568 "superblock": true, 00:21:52.568 "num_base_bdevs": 2, 00:21:52.568 "num_base_bdevs_discovered": 2, 00:21:52.568 "num_base_bdevs_operational": 2, 00:21:52.568 "process": { 00:21:52.568 "type": "rebuild", 00:21:52.568 "target": "spare", 00:21:52.568 "progress": { 00:21:52.568 "blocks": 2816, 00:21:52.568 "percent": 35 00:21:52.568 } 00:21:52.568 }, 00:21:52.568 "base_bdevs_list": [ 00:21:52.568 { 00:21:52.568 "name": "spare", 00:21:52.568 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:52.568 "is_configured": true, 00:21:52.568 "data_offset": 256, 00:21:52.568 "data_size": 7936 00:21:52.568 }, 00:21:52.568 { 00:21:52.568 "name": "BaseBdev2", 00:21:52.568 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:52.568 "is_configured": true, 00:21:52.568 "data_offset": 256, 00:21:52.568 "data_size": 7936 00:21:52.568 } 00:21:52.568 ] 00:21:52.568 }' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.568 19:41:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.501 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.760 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.760 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.760 "name": "raid_bdev1", 00:21:53.760 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:53.760 "strip_size_kb": 0, 00:21:53.760 "state": "online", 00:21:53.760 "raid_level": "raid1", 00:21:53.760 "superblock": true, 00:21:53.760 "num_base_bdevs": 2, 00:21:53.760 "num_base_bdevs_discovered": 2, 00:21:53.760 "num_base_bdevs_operational": 2, 00:21:53.760 "process": { 00:21:53.760 "type": "rebuild", 00:21:53.760 "target": "spare", 00:21:53.760 "progress": { 00:21:53.760 "blocks": 5888, 00:21:53.760 "percent": 74 00:21:53.760 } 00:21:53.760 }, 00:21:53.760 "base_bdevs_list": [ 00:21:53.760 { 
00:21:53.760 "name": "spare", 00:21:53.760 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:53.760 "is_configured": true, 00:21:53.760 "data_offset": 256, 00:21:53.760 "data_size": 7936 00:21:53.760 }, 00:21:53.760 { 00:21:53.760 "name": "BaseBdev2", 00:21:53.760 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:53.760 "is_configured": true, 00:21:53.760 "data_offset": 256, 00:21:53.760 "data_size": 7936 00:21:53.760 } 00:21:53.760 ] 00:21:53.760 }' 00:21:53.760 19:41:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.760 19:41:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.760 19:41:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.760 19:41:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.760 19:41:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:54.327 [2024-12-05 19:41:47.716538] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:54.327 [2024-12-05 19:41:47.716670] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:54.327 [2024-12-05 19:41:47.716877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.894 "name": "raid_bdev1", 00:21:54.894 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:54.894 "strip_size_kb": 0, 00:21:54.894 "state": "online", 00:21:54.894 "raid_level": "raid1", 00:21:54.894 "superblock": true, 00:21:54.894 "num_base_bdevs": 2, 00:21:54.894 "num_base_bdevs_discovered": 2, 00:21:54.894 "num_base_bdevs_operational": 2, 00:21:54.894 "base_bdevs_list": [ 00:21:54.894 { 00:21:54.894 "name": "spare", 00:21:54.894 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:54.894 "is_configured": true, 00:21:54.894 "data_offset": 256, 00:21:54.894 "data_size": 7936 00:21:54.894 }, 00:21:54.894 { 00:21:54.894 "name": "BaseBdev2", 00:21:54.894 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:54.894 "is_configured": true, 00:21:54.894 "data_offset": 256, 00:21:54.894 "data_size": 7936 00:21:54.894 } 00:21:54.894 ] 00:21:54.894 }' 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.894 "name": "raid_bdev1", 00:21:54.894 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:54.894 "strip_size_kb": 0, 00:21:54.894 "state": "online", 00:21:54.894 "raid_level": "raid1", 00:21:54.894 "superblock": true, 00:21:54.894 "num_base_bdevs": 2, 00:21:54.894 "num_base_bdevs_discovered": 2, 00:21:54.894 "num_base_bdevs_operational": 2, 00:21:54.894 "base_bdevs_list": [ 00:21:54.894 { 00:21:54.894 "name": "spare", 00:21:54.894 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:54.894 "is_configured": true, 00:21:54.894 
"data_offset": 256, 00:21:54.894 "data_size": 7936 00:21:54.894 }, 00:21:54.894 { 00:21:54.894 "name": "BaseBdev2", 00:21:54.894 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:54.894 "is_configured": true, 00:21:54.894 "data_offset": 256, 00:21:54.894 "data_size": 7936 00:21:54.894 } 00:21:54.894 ] 00:21:54.894 }' 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.894 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.153 "name": "raid_bdev1", 00:21:55.153 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:55.153 "strip_size_kb": 0, 00:21:55.153 "state": "online", 00:21:55.153 "raid_level": "raid1", 00:21:55.153 "superblock": true, 00:21:55.153 "num_base_bdevs": 2, 00:21:55.153 "num_base_bdevs_discovered": 2, 00:21:55.153 "num_base_bdevs_operational": 2, 00:21:55.153 "base_bdevs_list": [ 00:21:55.153 { 00:21:55.153 "name": "spare", 00:21:55.153 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:55.153 "is_configured": true, 00:21:55.153 "data_offset": 256, 00:21:55.153 "data_size": 7936 00:21:55.153 }, 00:21:55.153 { 00:21:55.153 "name": "BaseBdev2", 00:21:55.153 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:55.153 "is_configured": true, 00:21:55.153 "data_offset": 256, 00:21:55.153 "data_size": 7936 00:21:55.153 } 00:21:55.153 ] 00:21:55.153 }' 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.153 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.718 
[2024-12-05 19:41:48.903129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.718 [2024-12-05 19:41:48.903174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.718 [2024-12-05 19:41:48.903281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.718 [2024-12-05 19:41:48.903381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.718 [2024-12-05 19:41:48.903402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.718 19:41:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:55.976 /dev/nbd0 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:55.976 1+0 records in 00:21:55.976 1+0 records out 00:21:55.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298935 s, 13.7 MB/s 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.976 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:56.235 /dev/nbd1 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.235 1+0 records in 00:21:56.235 1+0 records out 00:21:56.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004755 s, 8.6 MB/s 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.235 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.494 19:41:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.752 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:57.011 19:41:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.011 [2024-12-05 19:41:50.425108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:57.011 [2024-12-05 19:41:50.425179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.011 [2024-12-05 19:41:50.425218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:57.011 [2024-12-05 19:41:50.425236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.011 [2024-12-05 19:41:50.428164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.011 
[2024-12-05 19:41:50.428211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:57.011 [2024-12-05 19:41:50.428343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:57.011 [2024-12-05 19:41:50.428425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:57.011 [2024-12-05 19:41:50.428626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.011 spare 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.011 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.268 [2024-12-05 19:41:50.528796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:57.268 [2024-12-05 19:41:50.528856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:57.268 [2024-12-05 19:41:50.529293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:57.268 [2024-12-05 19:41:50.529603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:57.268 [2024-12-05 19:41:50.529637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:57.268 [2024-12-05 19:41:50.529925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:57.268 19:41:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.268 "name": "raid_bdev1", 00:21:57.268 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:57.268 "strip_size_kb": 0, 00:21:57.268 "state": "online", 00:21:57.268 "raid_level": "raid1", 00:21:57.268 "superblock": true, 00:21:57.268 "num_base_bdevs": 2, 00:21:57.268 "num_base_bdevs_discovered": 2, 00:21:57.268 "num_base_bdevs_operational": 2, 
00:21:57.268 "base_bdevs_list": [ 00:21:57.268 { 00:21:57.268 "name": "spare", 00:21:57.268 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:57.268 "is_configured": true, 00:21:57.268 "data_offset": 256, 00:21:57.268 "data_size": 7936 00:21:57.268 }, 00:21:57.268 { 00:21:57.268 "name": "BaseBdev2", 00:21:57.268 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:57.268 "is_configured": true, 00:21:57.268 "data_offset": 256, 00:21:57.268 "data_size": 7936 00:21:57.268 } 00:21:57.268 ] 00:21:57.268 }' 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.268 19:41:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.835 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.835 "name": "raid_bdev1", 00:21:57.836 
"uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:57.836 "strip_size_kb": 0, 00:21:57.836 "state": "online", 00:21:57.836 "raid_level": "raid1", 00:21:57.836 "superblock": true, 00:21:57.836 "num_base_bdevs": 2, 00:21:57.836 "num_base_bdevs_discovered": 2, 00:21:57.836 "num_base_bdevs_operational": 2, 00:21:57.836 "base_bdevs_list": [ 00:21:57.836 { 00:21:57.836 "name": "spare", 00:21:57.836 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:57.836 "is_configured": true, 00:21:57.836 "data_offset": 256, 00:21:57.836 "data_size": 7936 00:21:57.836 }, 00:21:57.836 { 00:21:57.836 "name": "BaseBdev2", 00:21:57.836 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:57.836 "is_configured": true, 00:21:57.836 "data_offset": 256, 00:21:57.836 "data_size": 7936 00:21:57.836 } 00:21:57.836 ] 00:21:57.836 }' 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.836 [2024-12-05 19:41:51.262093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.836 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.836 
19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.094 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.094 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.094 "name": "raid_bdev1", 00:21:58.094 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:58.094 "strip_size_kb": 0, 00:21:58.094 "state": "online", 00:21:58.094 "raid_level": "raid1", 00:21:58.094 "superblock": true, 00:21:58.094 "num_base_bdevs": 2, 00:21:58.094 "num_base_bdevs_discovered": 1, 00:21:58.094 "num_base_bdevs_operational": 1, 00:21:58.094 "base_bdevs_list": [ 00:21:58.094 { 00:21:58.094 "name": null, 00:21:58.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.094 "is_configured": false, 00:21:58.094 "data_offset": 0, 00:21:58.094 "data_size": 7936 00:21:58.094 }, 00:21:58.094 { 00:21:58.094 "name": "BaseBdev2", 00:21:58.094 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:58.094 "is_configured": true, 00:21:58.094 "data_offset": 256, 00:21:58.094 "data_size": 7936 00:21:58.094 } 00:21:58.094 ] 00:21:58.094 }' 00:21:58.094 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.094 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.352 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:58.352 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.352 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.352 [2024-12-05 19:41:51.742250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.352 [2024-12-05 19:41:51.742516] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:21:58.352 [2024-12-05 19:41:51.742544] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:58.352 [2024-12-05 19:41:51.742596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.352 [2024-12-05 19:41:51.758264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:58.353 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.353 19:41:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:58.353 [2024-12-05 19:41:51.760878] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.350 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.622 
"name": "raid_bdev1", 00:21:59.622 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:59.622 "strip_size_kb": 0, 00:21:59.622 "state": "online", 00:21:59.622 "raid_level": "raid1", 00:21:59.622 "superblock": true, 00:21:59.622 "num_base_bdevs": 2, 00:21:59.622 "num_base_bdevs_discovered": 2, 00:21:59.622 "num_base_bdevs_operational": 2, 00:21:59.622 "process": { 00:21:59.622 "type": "rebuild", 00:21:59.622 "target": "spare", 00:21:59.622 "progress": { 00:21:59.622 "blocks": 2560, 00:21:59.622 "percent": 32 00:21:59.622 } 00:21:59.622 }, 00:21:59.622 "base_bdevs_list": [ 00:21:59.622 { 00:21:59.622 "name": "spare", 00:21:59.622 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:21:59.622 "is_configured": true, 00:21:59.622 "data_offset": 256, 00:21:59.622 "data_size": 7936 00:21:59.622 }, 00:21:59.622 { 00:21:59.622 "name": "BaseBdev2", 00:21:59.622 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:59.622 "is_configured": true, 00:21:59.622 "data_offset": 256, 00:21:59.622 "data_size": 7936 00:21:59.622 } 00:21:59.622 ] 00:21:59.622 }' 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.622 19:41:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 [2024-12-05 19:41:52.926072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.622 [2024-12-05 
19:41:52.970009] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:59.622 [2024-12-05 19:41:52.970133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.622 [2024-12-05 19:41:52.970160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.622 [2024-12-05 19:41:52.970175] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.622 19:41:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.622 "name": "raid_bdev1", 00:21:59.622 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:21:59.622 "strip_size_kb": 0, 00:21:59.622 "state": "online", 00:21:59.622 "raid_level": "raid1", 00:21:59.622 "superblock": true, 00:21:59.622 "num_base_bdevs": 2, 00:21:59.622 "num_base_bdevs_discovered": 1, 00:21:59.622 "num_base_bdevs_operational": 1, 00:21:59.622 "base_bdevs_list": [ 00:21:59.622 { 00:21:59.622 "name": null, 00:21:59.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.622 "is_configured": false, 00:21:59.622 "data_offset": 0, 00:21:59.622 "data_size": 7936 00:21:59.622 }, 00:21:59.622 { 00:21:59.622 "name": "BaseBdev2", 00:21:59.622 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:21:59.622 "is_configured": true, 00:21:59.622 "data_offset": 256, 00:21:59.622 "data_size": 7936 00:21:59.622 } 00:21:59.622 ] 00:21:59.622 }' 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.622 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.190 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:00.190 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.190 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.190 [2024-12-05 19:41:53.486000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:00.190 [2024-12-05 19:41:53.486086] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.190 [2024-12-05 19:41:53.486122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:00.190 [2024-12-05 19:41:53.486142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.190 [2024-12-05 19:41:53.486817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.190 [2024-12-05 19:41:53.486868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:00.190 [2024-12-05 19:41:53.486994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:00.190 [2024-12-05 19:41:53.487021] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:00.190 [2024-12-05 19:41:53.487036] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:00.190 [2024-12-05 19:41:53.487076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:00.190 [2024-12-05 19:41:53.502671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:00.190 spare 00:22:00.190 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.190 19:41:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:00.190 [2024-12-05 19:41:53.505241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.127 "name": "raid_bdev1", 00:22:01.127 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:01.127 "strip_size_kb": 0, 00:22:01.127 
"state": "online", 00:22:01.127 "raid_level": "raid1", 00:22:01.127 "superblock": true, 00:22:01.127 "num_base_bdevs": 2, 00:22:01.127 "num_base_bdevs_discovered": 2, 00:22:01.127 "num_base_bdevs_operational": 2, 00:22:01.127 "process": { 00:22:01.127 "type": "rebuild", 00:22:01.127 "target": "spare", 00:22:01.127 "progress": { 00:22:01.127 "blocks": 2560, 00:22:01.127 "percent": 32 00:22:01.127 } 00:22:01.127 }, 00:22:01.127 "base_bdevs_list": [ 00:22:01.127 { 00:22:01.127 "name": "spare", 00:22:01.127 "uuid": "76d0dbc1-9789-598e-9960-431b0e4cf860", 00:22:01.127 "is_configured": true, 00:22:01.127 "data_offset": 256, 00:22:01.127 "data_size": 7936 00:22:01.127 }, 00:22:01.127 { 00:22:01.127 "name": "BaseBdev2", 00:22:01.127 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:01.127 "is_configured": true, 00:22:01.127 "data_offset": 256, 00:22:01.127 "data_size": 7936 00:22:01.127 } 00:22:01.127 ] 00:22:01.127 }' 00:22:01.127 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.386 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.386 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.386 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.387 [2024-12-05 19:41:54.670456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.387 [2024-12-05 19:41:54.714404] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:22:01.387 [2024-12-05 19:41:54.714516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.387 [2024-12-05 19:41:54.714546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.387 [2024-12-05 19:41:54.714559] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.387 19:41:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.387 "name": "raid_bdev1", 00:22:01.387 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:01.387 "strip_size_kb": 0, 00:22:01.387 "state": "online", 00:22:01.387 "raid_level": "raid1", 00:22:01.387 "superblock": true, 00:22:01.387 "num_base_bdevs": 2, 00:22:01.387 "num_base_bdevs_discovered": 1, 00:22:01.387 "num_base_bdevs_operational": 1, 00:22:01.387 "base_bdevs_list": [ 00:22:01.387 { 00:22:01.387 "name": null, 00:22:01.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.387 "is_configured": false, 00:22:01.387 "data_offset": 0, 00:22:01.387 "data_size": 7936 00:22:01.387 }, 00:22:01.387 { 00:22:01.387 "name": "BaseBdev2", 00:22:01.387 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:01.387 "is_configured": true, 00:22:01.387 "data_offset": 256, 00:22:01.387 "data_size": 7936 00:22:01.387 } 00:22:01.387 ] 00:22:01.387 }' 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.387 19:41:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.954 "name": "raid_bdev1", 00:22:01.954 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:01.954 "strip_size_kb": 0, 00:22:01.954 "state": "online", 00:22:01.954 "raid_level": "raid1", 00:22:01.954 "superblock": true, 00:22:01.954 "num_base_bdevs": 2, 00:22:01.954 "num_base_bdevs_discovered": 1, 00:22:01.954 "num_base_bdevs_operational": 1, 00:22:01.954 "base_bdevs_list": [ 00:22:01.954 { 00:22:01.954 "name": null, 00:22:01.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.954 "is_configured": false, 00:22:01.954 "data_offset": 0, 00:22:01.954 "data_size": 7936 00:22:01.954 }, 00:22:01.954 { 00:22:01.954 "name": "BaseBdev2", 00:22:01.954 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:01.954 "is_configured": true, 00:22:01.954 "data_offset": 256, 00:22:01.954 "data_size": 7936 00:22:01.954 } 00:22:01.954 ] 00:22:01.954 }' 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:01.954 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.211 [2024-12-05 19:41:55.442494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:02.211 [2024-12-05 19:41:55.442577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.211 [2024-12-05 19:41:55.442621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:02.211 [2024-12-05 19:41:55.442650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.211 [2024-12-05 19:41:55.443257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.211 [2024-12-05 19:41:55.443302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:02.211 [2024-12-05 19:41:55.443415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:02.211 [2024-12-05 19:41:55.443447] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:02.211 [2024-12-05 19:41:55.443465] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:02.211 [2024-12-05 19:41:55.443480] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:02.211 BaseBdev1 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.211 19:41:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.146 "name": "raid_bdev1", 00:22:03.146 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:03.146 "strip_size_kb": 0, 00:22:03.146 "state": "online", 00:22:03.146 "raid_level": "raid1", 00:22:03.146 "superblock": true, 00:22:03.146 "num_base_bdevs": 2, 00:22:03.146 "num_base_bdevs_discovered": 1, 00:22:03.146 "num_base_bdevs_operational": 1, 00:22:03.146 "base_bdevs_list": [ 00:22:03.146 { 00:22:03.146 "name": null, 00:22:03.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.146 "is_configured": false, 00:22:03.146 "data_offset": 0, 00:22:03.146 "data_size": 7936 00:22:03.146 }, 00:22:03.146 { 00:22:03.146 "name": "BaseBdev2", 00:22:03.146 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:03.146 "is_configured": true, 00:22:03.146 "data_offset": 256, 00:22:03.146 "data_size": 7936 00:22:03.146 } 00:22:03.146 ] 00:22:03.146 }' 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.146 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.715 19:41:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.715 "name": "raid_bdev1", 00:22:03.715 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:03.715 "strip_size_kb": 0, 00:22:03.715 "state": "online", 00:22:03.715 "raid_level": "raid1", 00:22:03.715 "superblock": true, 00:22:03.715 "num_base_bdevs": 2, 00:22:03.715 "num_base_bdevs_discovered": 1, 00:22:03.715 "num_base_bdevs_operational": 1, 00:22:03.715 "base_bdevs_list": [ 00:22:03.715 { 00:22:03.715 "name": null, 00:22:03.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.715 "is_configured": false, 00:22:03.715 "data_offset": 0, 00:22:03.715 "data_size": 7936 00:22:03.715 }, 00:22:03.715 { 00:22:03.715 "name": "BaseBdev2", 00:22:03.715 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:03.715 "is_configured": true, 00:22:03.715 "data_offset": 256, 00:22:03.715 "data_size": 7936 00:22:03.715 } 00:22:03.715 ] 00:22:03.715 }' 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:03.715 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.716 [2024-12-05 19:41:57.139087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.716 [2024-12-05 19:41:57.139315] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:03.716 [2024-12-05 19:41:57.139341] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:03.716 request: 00:22:03.716 { 00:22:03.716 "base_bdev": "BaseBdev1", 00:22:03.716 "raid_bdev": "raid_bdev1", 00:22:03.716 "method": "bdev_raid_add_base_bdev", 00:22:03.716 "req_id": 1 00:22:03.716 } 00:22:03.716 Got JSON-RPC error response 00:22:03.716 response: 00:22:03.716 { 00:22:03.716 "code": -22, 00:22:03.716 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:03.716 } 00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.716 19:41:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.147 "name": "raid_bdev1", 00:22:05.147 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:05.147 "strip_size_kb": 0, 00:22:05.147 "state": "online", 00:22:05.147 "raid_level": "raid1", 00:22:05.147 "superblock": true, 00:22:05.147 "num_base_bdevs": 2, 00:22:05.147 "num_base_bdevs_discovered": 1, 00:22:05.147 "num_base_bdevs_operational": 1, 00:22:05.147 "base_bdevs_list": [ 00:22:05.147 { 00:22:05.147 "name": null, 00:22:05.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.147 "is_configured": false, 00:22:05.147 "data_offset": 0, 00:22:05.147 "data_size": 7936 00:22:05.147 }, 00:22:05.147 { 00:22:05.147 "name": "BaseBdev2", 00:22:05.147 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:05.147 "is_configured": true, 00:22:05.147 "data_offset": 256, 00:22:05.147 "data_size": 7936 00:22:05.147 } 00:22:05.147 ] 00:22:05.147 }' 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.147 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.406 19:41:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.406 "name": "raid_bdev1", 00:22:05.406 "uuid": "936a962e-1b92-4a20-8345-a2cefce15c40", 00:22:05.406 "strip_size_kb": 0, 00:22:05.406 "state": "online", 00:22:05.406 "raid_level": "raid1", 00:22:05.406 "superblock": true, 00:22:05.406 "num_base_bdevs": 2, 00:22:05.406 "num_base_bdevs_discovered": 1, 00:22:05.406 "num_base_bdevs_operational": 1, 00:22:05.406 "base_bdevs_list": [ 00:22:05.406 { 00:22:05.406 "name": null, 00:22:05.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.406 "is_configured": false, 00:22:05.406 "data_offset": 0, 00:22:05.406 "data_size": 7936 00:22:05.406 }, 00:22:05.406 { 00:22:05.406 "name": "BaseBdev2", 00:22:05.406 "uuid": "258cdbbb-187a-5205-8ebb-920a660b1f59", 00:22:05.406 "is_configured": true, 00:22:05.406 "data_offset": 256, 00:22:05.406 "data_size": 7936 00:22:05.406 } 00:22:05.406 ] 00:22:05.406 }' 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:05.406 19:41:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86950 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86950 ']' 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86950 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.406 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86950 00:22:05.665 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.665 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.665 killing process with pid 86950 00:22:05.665 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86950' 00:22:05.665 Received shutdown signal, test time was about 60.000000 seconds 00:22:05.665 00:22:05.665 Latency(us) 00:22:05.665 [2024-12-05T19:41:59.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.665 [2024-12-05T19:41:59.106Z] =================================================================================================================== 00:22:05.665 [2024-12-05T19:41:59.106Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.665 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86950 00:22:05.665 [2024-12-05 19:41:58.858802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.665 19:41:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86950 00:22:05.665 [2024-12-05 19:41:58.858973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.665 [2024-12-05 
19:41:58.859046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.665 [2024-12-05 19:41:58.859078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:05.924 [2024-12-05 19:41:59.131508] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:06.859 19:42:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:22:06.859 00:22:06.859 real 0m21.423s 00:22:06.859 user 0m28.946s 00:22:06.859 sys 0m2.533s 00:22:06.859 19:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.859 ************************************ 00:22:06.859 END TEST raid_rebuild_test_sb_4k 00:22:06.859 ************************************ 00:22:06.859 19:42:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.859 19:42:00 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:22:06.859 19:42:00 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:06.859 19:42:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:06.859 19:42:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.859 19:42:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.859 ************************************ 00:22:06.859 START TEST raid_state_function_test_sb_md_separate 00:22:06.859 ************************************ 00:22:06.859 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:06.859 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:06.859 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:06.859 
19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:06.859 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:06.860 19:42:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87654 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:06.860 Process raid pid: 87654 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87654' 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87654 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87654 ']' 00:22:06.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.860 19:42:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:07.118 [2024-12-05 19:42:00.349339] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:22:07.118 [2024-12-05 19:42:00.349496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.118 [2024-12-05 19:42:00.531922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.377 [2024-12-05 19:42:00.688063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.635 [2024-12-05 19:42:00.908588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.635 [2024-12-05 19:42:00.908661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.201 [2024-12-05 19:42:01.339455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.201 [2024-12-05 19:42:01.339529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:22:08.201 [2024-12-05 19:42:01.339546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.201 [2024-12-05 19:42:01.339562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.201 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.202 "name": "Existed_Raid", 00:22:08.202 "uuid": "19a62246-1671-4d4e-87ca-dac9a5ae1622", 00:22:08.202 "strip_size_kb": 0, 00:22:08.202 "state": "configuring", 00:22:08.202 "raid_level": "raid1", 00:22:08.202 "superblock": true, 00:22:08.202 "num_base_bdevs": 2, 00:22:08.202 "num_base_bdevs_discovered": 0, 00:22:08.202 "num_base_bdevs_operational": 2, 00:22:08.202 "base_bdevs_list": [ 00:22:08.202 { 00:22:08.202 "name": "BaseBdev1", 00:22:08.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.202 "is_configured": false, 00:22:08.202 "data_offset": 0, 00:22:08.202 "data_size": 0 00:22:08.202 }, 00:22:08.202 { 00:22:08.202 "name": "BaseBdev2", 00:22:08.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.202 "is_configured": false, 00:22:08.202 "data_offset": 0, 00:22:08.202 "data_size": 0 00:22:08.202 } 00:22:08.202 ] 00:22:08.202 }' 00:22:08.202 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.202 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.460 [2024-12-05 
19:42:01.859579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.460 [2024-12-05 19:42:01.859623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.460 [2024-12-05 19:42:01.867549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.460 [2024-12-05 19:42:01.867607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:08.460 [2024-12-05 19:42:01.867624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.460 [2024-12-05 19:42:01.867642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.460 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.719 [2024-12-05 19:42:01.913605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.719 BaseBdev1 
00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.719 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.719 [ 00:22:08.719 { 00:22:08.719 "name": "BaseBdev1", 00:22:08.719 "aliases": [ 00:22:08.719 "c718b419-1266-4580-8abc-a259eb4f3776" 00:22:08.719 ], 00:22:08.719 "product_name": "Malloc disk", 00:22:08.719 
"block_size": 4096, 00:22:08.719 "num_blocks": 8192, 00:22:08.719 "uuid": "c718b419-1266-4580-8abc-a259eb4f3776", 00:22:08.719 "md_size": 32, 00:22:08.719 "md_interleave": false, 00:22:08.719 "dif_type": 0, 00:22:08.719 "assigned_rate_limits": { 00:22:08.719 "rw_ios_per_sec": 0, 00:22:08.719 "rw_mbytes_per_sec": 0, 00:22:08.719 "r_mbytes_per_sec": 0, 00:22:08.719 "w_mbytes_per_sec": 0 00:22:08.719 }, 00:22:08.719 "claimed": true, 00:22:08.719 "claim_type": "exclusive_write", 00:22:08.719 "zoned": false, 00:22:08.719 "supported_io_types": { 00:22:08.719 "read": true, 00:22:08.719 "write": true, 00:22:08.719 "unmap": true, 00:22:08.719 "flush": true, 00:22:08.719 "reset": true, 00:22:08.719 "nvme_admin": false, 00:22:08.719 "nvme_io": false, 00:22:08.719 "nvme_io_md": false, 00:22:08.719 "write_zeroes": true, 00:22:08.719 "zcopy": true, 00:22:08.719 "get_zone_info": false, 00:22:08.719 "zone_management": false, 00:22:08.720 "zone_append": false, 00:22:08.720 "compare": false, 00:22:08.720 "compare_and_write": false, 00:22:08.720 "abort": true, 00:22:08.720 "seek_hole": false, 00:22:08.720 "seek_data": false, 00:22:08.720 "copy": true, 00:22:08.720 "nvme_iov_md": false 00:22:08.720 }, 00:22:08.720 "memory_domains": [ 00:22:08.720 { 00:22:08.720 "dma_device_id": "system", 00:22:08.720 "dma_device_type": 1 00:22:08.720 }, 00:22:08.720 { 00:22:08.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.720 "dma_device_type": 2 00:22:08.720 } 00:22:08.720 ], 00:22:08.720 "driver_specific": {} 00:22:08.720 } 00:22:08.720 ] 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:08.720 19:42:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.720 "name": "Existed_Raid", 00:22:08.720 "uuid": "1e2694de-44df-48d6-8de4-daa05d995c26", 
00:22:08.720 "strip_size_kb": 0, 00:22:08.720 "state": "configuring", 00:22:08.720 "raid_level": "raid1", 00:22:08.720 "superblock": true, 00:22:08.720 "num_base_bdevs": 2, 00:22:08.720 "num_base_bdevs_discovered": 1, 00:22:08.720 "num_base_bdevs_operational": 2, 00:22:08.720 "base_bdevs_list": [ 00:22:08.720 { 00:22:08.720 "name": "BaseBdev1", 00:22:08.720 "uuid": "c718b419-1266-4580-8abc-a259eb4f3776", 00:22:08.720 "is_configured": true, 00:22:08.720 "data_offset": 256, 00:22:08.720 "data_size": 7936 00:22:08.720 }, 00:22:08.720 { 00:22:08.720 "name": "BaseBdev2", 00:22:08.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.720 "is_configured": false, 00:22:08.720 "data_offset": 0, 00:22:08.720 "data_size": 0 00:22:08.720 } 00:22:08.720 ] 00:22:08.720 }' 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.720 19:42:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.288 [2024-12-05 19:42:02.437867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.288 [2024-12-05 19:42:02.438073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:09.288 19:42:02 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.288 [2024-12-05 19:42:02.445886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.288 [2024-12-05 19:42:02.448292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.288 [2024-12-05 19:42:02.448351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.288 "name": "Existed_Raid", 00:22:09.288 "uuid": "8e97ce37-d589-446d-bf9d-2728dd32a548", 00:22:09.288 "strip_size_kb": 0, 00:22:09.288 "state": "configuring", 00:22:09.288 "raid_level": "raid1", 00:22:09.288 "superblock": true, 00:22:09.288 "num_base_bdevs": 2, 00:22:09.288 "num_base_bdevs_discovered": 1, 00:22:09.288 "num_base_bdevs_operational": 2, 00:22:09.288 "base_bdevs_list": [ 00:22:09.288 { 00:22:09.288 "name": "BaseBdev1", 00:22:09.288 "uuid": "c718b419-1266-4580-8abc-a259eb4f3776", 00:22:09.288 "is_configured": true, 00:22:09.288 "data_offset": 256, 00:22:09.288 "data_size": 7936 00:22:09.288 }, 00:22:09.288 { 00:22:09.288 "name": "BaseBdev2", 00:22:09.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.288 "is_configured": false, 00:22:09.288 "data_offset": 0, 00:22:09.288 "data_size": 0 00:22:09.288 } 00:22:09.288 ] 00:22:09.288 }' 00:22:09.288 19:42:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.288 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.624 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:09.624 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.624 19:42:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.624 [2024-12-05 19:42:03.030221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.624 [2024-12-05 19:42:03.030516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:09.624 [2024-12-05 19:42:03.030539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:09.624 [2024-12-05 19:42:03.030639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:09.624 [2024-12-05 19:42:03.030836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:09.624 [2024-12-05 19:42:03.030857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:09.624 BaseBdev2 00:22:09.624 [2024-12-05 19:42:03.030976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.624 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.901 [ 00:22:09.901 { 00:22:09.901 "name": "BaseBdev2", 00:22:09.901 "aliases": [ 00:22:09.901 "70973e91-96d2-4727-9b63-d2936e378950" 00:22:09.901 ], 00:22:09.901 "product_name": "Malloc disk", 00:22:09.901 "block_size": 4096, 00:22:09.901 "num_blocks": 8192, 00:22:09.901 "uuid": "70973e91-96d2-4727-9b63-d2936e378950", 00:22:09.901 "md_size": 32, 00:22:09.901 "md_interleave": false, 00:22:09.901 "dif_type": 0, 00:22:09.901 "assigned_rate_limits": { 00:22:09.901 "rw_ios_per_sec": 0, 00:22:09.901 "rw_mbytes_per_sec": 0, 00:22:09.901 "r_mbytes_per_sec": 0, 00:22:09.901 "w_mbytes_per_sec": 0 00:22:09.901 }, 00:22:09.901 "claimed": true, 00:22:09.901 "claim_type": 
"exclusive_write", 00:22:09.901 "zoned": false, 00:22:09.901 "supported_io_types": { 00:22:09.901 "read": true, 00:22:09.901 "write": true, 00:22:09.901 "unmap": true, 00:22:09.901 "flush": true, 00:22:09.901 "reset": true, 00:22:09.901 "nvme_admin": false, 00:22:09.901 "nvme_io": false, 00:22:09.901 "nvme_io_md": false, 00:22:09.901 "write_zeroes": true, 00:22:09.901 "zcopy": true, 00:22:09.901 "get_zone_info": false, 00:22:09.901 "zone_management": false, 00:22:09.901 "zone_append": false, 00:22:09.901 "compare": false, 00:22:09.901 "compare_and_write": false, 00:22:09.901 "abort": true, 00:22:09.901 "seek_hole": false, 00:22:09.901 "seek_data": false, 00:22:09.901 "copy": true, 00:22:09.901 "nvme_iov_md": false 00:22:09.901 }, 00:22:09.901 "memory_domains": [ 00:22:09.901 { 00:22:09.901 "dma_device_id": "system", 00:22:09.901 "dma_device_type": 1 00:22:09.901 }, 00:22:09.901 { 00:22:09.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.901 "dma_device_type": 2 00:22:09.901 } 00:22:09.901 ], 00:22:09.901 "driver_specific": {} 00:22:09.901 } 00:22:09.901 ] 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.901 
19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.901 "name": "Existed_Raid", 00:22:09.901 "uuid": "8e97ce37-d589-446d-bf9d-2728dd32a548", 00:22:09.901 "strip_size_kb": 0, 00:22:09.901 "state": "online", 00:22:09.901 "raid_level": "raid1", 00:22:09.901 "superblock": true, 00:22:09.901 "num_base_bdevs": 2, 00:22:09.901 "num_base_bdevs_discovered": 2, 00:22:09.901 "num_base_bdevs_operational": 2, 00:22:09.901 
"base_bdevs_list": [ 00:22:09.901 { 00:22:09.901 "name": "BaseBdev1", 00:22:09.901 "uuid": "c718b419-1266-4580-8abc-a259eb4f3776", 00:22:09.901 "is_configured": true, 00:22:09.901 "data_offset": 256, 00:22:09.901 "data_size": 7936 00:22:09.901 }, 00:22:09.901 { 00:22:09.901 "name": "BaseBdev2", 00:22:09.901 "uuid": "70973e91-96d2-4727-9b63-d2936e378950", 00:22:09.901 "is_configured": true, 00:22:09.901 "data_offset": 256, 00:22:09.901 "data_size": 7936 00:22:09.901 } 00:22:09.901 ] 00:22:09.901 }' 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.901 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:22:10.161 [2024-12-05 19:42:03.566883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:10.161 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.420 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.421 "name": "Existed_Raid", 00:22:10.421 "aliases": [ 00:22:10.421 "8e97ce37-d589-446d-bf9d-2728dd32a548" 00:22:10.421 ], 00:22:10.421 "product_name": "Raid Volume", 00:22:10.421 "block_size": 4096, 00:22:10.421 "num_blocks": 7936, 00:22:10.421 "uuid": "8e97ce37-d589-446d-bf9d-2728dd32a548", 00:22:10.421 "md_size": 32, 00:22:10.421 "md_interleave": false, 00:22:10.421 "dif_type": 0, 00:22:10.421 "assigned_rate_limits": { 00:22:10.421 "rw_ios_per_sec": 0, 00:22:10.421 "rw_mbytes_per_sec": 0, 00:22:10.421 "r_mbytes_per_sec": 0, 00:22:10.421 "w_mbytes_per_sec": 0 00:22:10.421 }, 00:22:10.421 "claimed": false, 00:22:10.421 "zoned": false, 00:22:10.421 "supported_io_types": { 00:22:10.421 "read": true, 00:22:10.421 "write": true, 00:22:10.421 "unmap": false, 00:22:10.421 "flush": false, 00:22:10.421 "reset": true, 00:22:10.421 "nvme_admin": false, 00:22:10.421 "nvme_io": false, 00:22:10.421 "nvme_io_md": false, 00:22:10.421 "write_zeroes": true, 00:22:10.421 "zcopy": false, 00:22:10.421 "get_zone_info": false, 00:22:10.421 "zone_management": false, 00:22:10.421 "zone_append": false, 00:22:10.421 "compare": false, 00:22:10.421 "compare_and_write": false, 00:22:10.421 "abort": false, 00:22:10.421 "seek_hole": false, 00:22:10.421 "seek_data": false, 00:22:10.421 "copy": false, 00:22:10.421 "nvme_iov_md": false 00:22:10.421 }, 00:22:10.421 "memory_domains": [ 00:22:10.421 { 00:22:10.421 "dma_device_id": "system", 00:22:10.421 "dma_device_type": 1 00:22:10.421 }, 00:22:10.421 { 00:22:10.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.421 "dma_device_type": 2 00:22:10.421 }, 00:22:10.421 { 
00:22:10.421 "dma_device_id": "system", 00:22:10.421 "dma_device_type": 1 00:22:10.421 }, 00:22:10.421 { 00:22:10.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.421 "dma_device_type": 2 00:22:10.421 } 00:22:10.421 ], 00:22:10.421 "driver_specific": { 00:22:10.421 "raid": { 00:22:10.421 "uuid": "8e97ce37-d589-446d-bf9d-2728dd32a548", 00:22:10.421 "strip_size_kb": 0, 00:22:10.421 "state": "online", 00:22:10.421 "raid_level": "raid1", 00:22:10.421 "superblock": true, 00:22:10.421 "num_base_bdevs": 2, 00:22:10.421 "num_base_bdevs_discovered": 2, 00:22:10.421 "num_base_bdevs_operational": 2, 00:22:10.421 "base_bdevs_list": [ 00:22:10.421 { 00:22:10.421 "name": "BaseBdev1", 00:22:10.421 "uuid": "c718b419-1266-4580-8abc-a259eb4f3776", 00:22:10.421 "is_configured": true, 00:22:10.421 "data_offset": 256, 00:22:10.421 "data_size": 7936 00:22:10.421 }, 00:22:10.421 { 00:22:10.421 "name": "BaseBdev2", 00:22:10.421 "uuid": "70973e91-96d2-4727-9b63-d2936e378950", 00:22:10.421 "is_configured": true, 00:22:10.421 "data_offset": 256, 00:22:10.421 "data_size": 7936 00:22:10.421 } 00:22:10.421 ] 00:22:10.421 } 00:22:10.421 } 00:22:10.421 }' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:10.421 BaseBdev2' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.421 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.421 [2024-12-05 19:42:03.830600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.680 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.681 "name": "Existed_Raid", 00:22:10.681 "uuid": "8e97ce37-d589-446d-bf9d-2728dd32a548", 00:22:10.681 "strip_size_kb": 0, 00:22:10.681 "state": "online", 00:22:10.681 "raid_level": "raid1", 00:22:10.681 "superblock": true, 00:22:10.681 "num_base_bdevs": 2, 00:22:10.681 "num_base_bdevs_discovered": 1, 00:22:10.681 "num_base_bdevs_operational": 1, 00:22:10.681 "base_bdevs_list": [ 00:22:10.681 { 00:22:10.681 "name": null, 00:22:10.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.681 "is_configured": false, 00:22:10.681 "data_offset": 0, 00:22:10.681 "data_size": 7936 00:22:10.681 }, 00:22:10.681 { 00:22:10.681 "name": "BaseBdev2", 00:22:10.681 "uuid": 
"70973e91-96d2-4727-9b63-d2936e378950", 00:22:10.681 "is_configured": true, 00:22:10.681 "data_offset": 256, 00:22:10.681 "data_size": 7936 00:22:10.681 } 00:22:10.681 ] 00:22:10.681 }' 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.681 19:42:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.250 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.250 [2024-12-05 19:42:04.512185] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.250 [2024-12-05 19:42:04.512323] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.250 [2024-12-05 19:42:04.606430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.250 [2024-12-05 19:42:04.606490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.251 [2024-12-05 19:42:04.606511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:11.251 19:42:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87654 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87654 ']' 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87654 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.251 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87654 00:22:11.510 killing process with pid 87654 00:22:11.510 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.510 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.510 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87654' 00:22:11.510 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87654 00:22:11.510 [2024-12-05 19:42:04.697471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.510 19:42:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87654 00:22:11.510 [2024-12-05 19:42:04.712104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:12.445 ************************************ 00:22:12.445 END TEST raid_state_function_test_sb_md_separate 00:22:12.445 ************************************ 00:22:12.445 19:42:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:22:12.445 00:22:12.445 real 0m5.532s 00:22:12.445 user 0m8.352s 
00:22:12.445 sys 0m0.767s 00:22:12.445 19:42:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.445 19:42:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 19:42:05 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:12.445 19:42:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:12.445 19:42:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.445 19:42:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 ************************************ 00:22:12.445 START TEST raid_superblock_test_md_separate 00:22:12.445 ************************************ 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87905 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87905 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87905 ']' 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.445 19:42:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.703 [2024-12-05 19:42:05.917493] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:22:12.703 [2024-12-05 19:42:05.917879] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87905 ] 00:22:12.703 [2024-12-05 19:42:06.093194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.961 [2024-12-05 19:42:06.223016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.219 [2024-12-05 19:42:06.426019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.219 [2024-12-05 19:42:06.426096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:13.477 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:13.477 19:42:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.478 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.478 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.478 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:13.478 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.478 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.736 malloc1 00:22:13.736 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.736 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:13.736 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.737 [2024-12-05 19:42:06.940637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:13.737 [2024-12-05 19:42:06.940735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.737 [2024-12-05 19:42:06.940772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:13.737 [2024-12-05 19:42:06.940788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.737 [2024-12-05 19:42:06.943389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.737 [2024-12-05 19:42:06.943435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:22:13.737 pt1 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.737 malloc2 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:13.737 19:42:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.737 19:42:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.737 [2024-12-05 19:42:06.998044] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:13.737 [2024-12-05 19:42:06.998278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.737 [2024-12-05 19:42:06.998323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:13.737 [2024-12-05 19:42:06.998339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.737 [2024-12-05 19:42:07.000884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.737 [2024-12-05 19:42:07.000928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:13.737 pt2 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.737 [2024-12-05 19:42:07.010083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.737 [2024-12-05 19:42:07.012577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.737 [2024-12-05 19:42:07.012860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:13.737 [2024-12-05 19:42:07.012883] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:13.737 [2024-12-05 19:42:07.012994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:13.737 [2024-12-05 19:42:07.013155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:13.737 [2024-12-05 19:42:07.013180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:13.737 [2024-12-05 19:42:07.013316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.737 19:42:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.737 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.737 "name": "raid_bdev1", 00:22:13.737 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326", 00:22:13.737 "strip_size_kb": 0, 00:22:13.737 "state": "online", 00:22:13.737 "raid_level": "raid1", 00:22:13.737 "superblock": true, 00:22:13.737 "num_base_bdevs": 2, 00:22:13.737 "num_base_bdevs_discovered": 2, 00:22:13.737 "num_base_bdevs_operational": 2, 00:22:13.737 "base_bdevs_list": [ 00:22:13.737 { 00:22:13.737 "name": "pt1", 00:22:13.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.737 "is_configured": true, 00:22:13.738 "data_offset": 256, 00:22:13.738 "data_size": 7936 00:22:13.738 }, 00:22:13.738 { 00:22:13.738 "name": "pt2", 00:22:13.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.738 "is_configured": true, 00:22:13.738 "data_offset": 256, 00:22:13.738 "data_size": 7936 00:22:13.738 } 00:22:13.738 ] 00:22:13.738 }' 00:22:13.738 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.738 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.305 [2024-12-05 19:42:07.514570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.305 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.305 "name": "raid_bdev1", 00:22:14.305 "aliases": [ 00:22:14.305 "1f168ae7-cc4a-4278-a08d-7a7f666a1326" 00:22:14.305 ], 00:22:14.305 "product_name": "Raid Volume", 00:22:14.305 "block_size": 4096, 00:22:14.305 "num_blocks": 7936, 00:22:14.305 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326", 00:22:14.305 "md_size": 32, 00:22:14.305 "md_interleave": false, 00:22:14.306 "dif_type": 0, 00:22:14.306 "assigned_rate_limits": { 00:22:14.306 "rw_ios_per_sec": 0, 00:22:14.306 "rw_mbytes_per_sec": 0, 00:22:14.306 "r_mbytes_per_sec": 0, 00:22:14.306 "w_mbytes_per_sec": 0 00:22:14.306 }, 00:22:14.306 "claimed": false, 00:22:14.306 "zoned": false, 
00:22:14.306 "supported_io_types": { 00:22:14.306 "read": true, 00:22:14.306 "write": true, 00:22:14.306 "unmap": false, 00:22:14.306 "flush": false, 00:22:14.306 "reset": true, 00:22:14.306 "nvme_admin": false, 00:22:14.306 "nvme_io": false, 00:22:14.306 "nvme_io_md": false, 00:22:14.306 "write_zeroes": true, 00:22:14.306 "zcopy": false, 00:22:14.306 "get_zone_info": false, 00:22:14.306 "zone_management": false, 00:22:14.306 "zone_append": false, 00:22:14.306 "compare": false, 00:22:14.306 "compare_and_write": false, 00:22:14.306 "abort": false, 00:22:14.306 "seek_hole": false, 00:22:14.306 "seek_data": false, 00:22:14.306 "copy": false, 00:22:14.306 "nvme_iov_md": false 00:22:14.306 }, 00:22:14.306 "memory_domains": [ 00:22:14.306 { 00:22:14.306 "dma_device_id": "system", 00:22:14.306 "dma_device_type": 1 00:22:14.306 }, 00:22:14.306 { 00:22:14.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.306 "dma_device_type": 2 00:22:14.306 }, 00:22:14.306 { 00:22:14.306 "dma_device_id": "system", 00:22:14.306 "dma_device_type": 1 00:22:14.306 }, 00:22:14.306 { 00:22:14.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.306 "dma_device_type": 2 00:22:14.306 } 00:22:14.306 ], 00:22:14.306 "driver_specific": { 00:22:14.306 "raid": { 00:22:14.306 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326", 00:22:14.306 "strip_size_kb": 0, 00:22:14.306 "state": "online", 00:22:14.306 "raid_level": "raid1", 00:22:14.306 "superblock": true, 00:22:14.306 "num_base_bdevs": 2, 00:22:14.306 "num_base_bdevs_discovered": 2, 00:22:14.306 "num_base_bdevs_operational": 2, 00:22:14.306 "base_bdevs_list": [ 00:22:14.306 { 00:22:14.306 "name": "pt1", 00:22:14.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.306 "is_configured": true, 00:22:14.306 "data_offset": 256, 00:22:14.306 "data_size": 7936 00:22:14.306 }, 00:22:14.306 { 00:22:14.306 "name": "pt2", 00:22:14.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.306 "is_configured": true, 00:22:14.306 "data_offset": 256, 
00:22:14.306 "data_size": 7936 00:22:14.306 } 00:22:14.306 ] 00:22:14.306 } 00:22:14.306 } 00:22:14.306 }' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:14.306 pt2' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.306 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.565 [2024-12-05 19:42:07.774640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1f168ae7-cc4a-4278-a08d-7a7f666a1326 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 1f168ae7-cc4a-4278-a08d-7a7f666a1326 ']' 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.565 [2024-12-05 19:42:07.826258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.565 [2024-12-05 19:42:07.826296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.565 [2024-12-05 19:42:07.826424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.565 [2024-12-05 19:42:07.826506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.565 [2024-12-05 19:42:07.826526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:14.565 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 [2024-12-05 19:42:07.970324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:14.566 [2024-12-05 19:42:07.972856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:14.566 [2024-12-05 19:42:07.972965] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:22:14.566 [2024-12-05 19:42:07.973048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:22:14.566 [2024-12-05 19:42:07.973074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:14.566 [2024-12-05 19:42:07.973090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:22:14.566 request:
00:22:14.566 {
00:22:14.566 "name": "raid_bdev1",
00:22:14.566 "raid_level": "raid1",
00:22:14.566 "base_bdevs": [
00:22:14.566 "malloc1",
00:22:14.566 "malloc2"
00:22:14.566 ],
00:22:14.566 "superblock": false,
00:22:14.566 "method": "bdev_raid_create",
00:22:14.566 "req_id": 1
00:22:14.566 }
00:22:14.566 Got JSON-RPC error response
00:22:14.566 response:
00:22:14.566 {
00:22:14.566 "code": -17,
00:22:14.566 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:14.566 }
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.566 19:42:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.826 [2024-12-05 19:42:08.038334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:14.826 [2024-12-05 19:42:08.038622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:14.826 [2024-12-05 19:42:08.038695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:22:14.826 [2024-12-05 19:42:08.038962] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:14.826 [2024-12-05 19:42:08.041629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:14.826 [2024-12-05 19:42:08.041807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:14.826 [2024-12-05 19:42:08.041988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:22:14.826 [2024-12-05 19:42:08.042174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:14.826 pt1
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:14.826 "name": "raid_bdev1",
00:22:14.826 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326",
00:22:14.826 "strip_size_kb": 0,
00:22:14.826 "state": "configuring",
00:22:14.826 "raid_level": "raid1",
00:22:14.826 "superblock": true,
00:22:14.826 "num_base_bdevs": 2,
00:22:14.826 "num_base_bdevs_discovered": 1,
00:22:14.826 "num_base_bdevs_operational": 2,
00:22:14.826 "base_bdevs_list": [
00:22:14.826 {
00:22:14.826 "name": "pt1",
00:22:14.826 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:14.826 "is_configured": true,
00:22:14.826 "data_offset": 256,
00:22:14.826 "data_size": 7936
00:22:14.826 },
00:22:14.826 {
00:22:14.826 "name": null,
00:22:14.826 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:14.826 "is_configured": false,
00:22:14.826 "data_offset": 256,
00:22:14.826 "data_size": 7936
00:22:14.826 }
00:22:14.826 ]
00:22:14.826 }'
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:14.826 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.394 [2024-12-05 19:42:08.562553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:15.394 [2024-12-05 19:42:08.562653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:15.394 [2024-12-05 19:42:08.562687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:22:15.394 [2024-12-05 19:42:08.562718] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:15.394 [2024-12-05 19:42:08.563029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:15.394 [2024-12-05 19:42:08.563066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:15.394 [2024-12-05 19:42:08.563135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:22:15.394 [2024-12-05 19:42:08.563171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:15.394 [2024-12-05 19:42:08.563313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:22:15.394 [2024-12-05 19:42:08.563333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:22:15.394 [2024-12-05 19:42:08.563426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:22:15.394 [2024-12-05 19:42:08.563572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:22:15.394 [2024-12-05 19:42:08.563586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:22:15.394 [2024-12-05 19:42:08.563726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:15.394 pt2
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:15.394 "name": "raid_bdev1",
00:22:15.394 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326",
00:22:15.394 "strip_size_kb": 0,
00:22:15.394 "state": "online",
00:22:15.394 "raid_level": "raid1",
00:22:15.394 "superblock": true,
00:22:15.394 "num_base_bdevs": 2,
00:22:15.394 "num_base_bdevs_discovered": 2,
00:22:15.394 "num_base_bdevs_operational": 2,
00:22:15.394 "base_bdevs_list": [
00:22:15.394 {
00:22:15.394 "name": "pt1",
00:22:15.394 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:15.394 "is_configured": true,
00:22:15.394 "data_offset": 256,
00:22:15.394 "data_size": 7936
00:22:15.394 },
00:22:15.394 {
00:22:15.394 "name": "pt2",
00:22:15.394 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:15.394 "is_configured": true,
00:22:15.394 "data_offset": 256,
00:22:15.394 "data_size": 7936
00:22:15.394 }
00:22:15.394 ]
00:22:15.394 }'
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:15.394 19:42:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.652 [2024-12-05 19:42:09.063047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:15.652 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.910 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:15.910 "name": "raid_bdev1",
00:22:15.910 "aliases": [
00:22:15.910 "1f168ae7-cc4a-4278-a08d-7a7f666a1326"
00:22:15.910 ],
00:22:15.910 "product_name": "Raid Volume",
00:22:15.910 "block_size": 4096,
00:22:15.910 "num_blocks": 7936,
00:22:15.910 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326",
00:22:15.910 "md_size": 32,
00:22:15.910 "md_interleave": false,
00:22:15.910 "dif_type": 0,
00:22:15.911 "assigned_rate_limits": {
00:22:15.911 "rw_ios_per_sec": 0,
00:22:15.911 "rw_mbytes_per_sec": 0,
00:22:15.911 "r_mbytes_per_sec": 0,
00:22:15.911 "w_mbytes_per_sec": 0
00:22:15.911 },
00:22:15.911 "claimed": false,
00:22:15.911 "zoned": false,
00:22:15.911 "supported_io_types": {
00:22:15.911 "read": true,
00:22:15.911 "write": true,
00:22:15.911 "unmap": false,
00:22:15.911 "flush": false,
00:22:15.911 "reset": true,
00:22:15.911 "nvme_admin": false,
00:22:15.911 "nvme_io": false,
00:22:15.911 "nvme_io_md": false,
00:22:15.911 "write_zeroes": true,
00:22:15.911 "zcopy": false,
00:22:15.911 "get_zone_info": false,
00:22:15.911 "zone_management": false,
00:22:15.911 "zone_append": false,
00:22:15.911 "compare": false,
00:22:15.911 "compare_and_write": false,
00:22:15.911 "abort": false,
00:22:15.911 "seek_hole": false,
00:22:15.911 "seek_data": false,
00:22:15.911 "copy": false,
00:22:15.911 "nvme_iov_md": false
00:22:15.911 },
00:22:15.911 "memory_domains": [
00:22:15.911 {
00:22:15.911 "dma_device_id": "system",
00:22:15.911 "dma_device_type": 1
00:22:15.911 },
00:22:15.911 {
00:22:15.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:15.911 "dma_device_type": 2
00:22:15.911 },
00:22:15.911 {
00:22:15.911 "dma_device_id": "system",
00:22:15.911 "dma_device_type": 1
00:22:15.911 },
00:22:15.911 {
00:22:15.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:15.911 "dma_device_type": 2
00:22:15.911 }
00:22:15.911 ],
00:22:15.911 "driver_specific": {
00:22:15.911 "raid": {
00:22:15.911 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326",
00:22:15.911 "strip_size_kb": 0,
00:22:15.911 "state": "online",
00:22:15.911 "raid_level": "raid1",
00:22:15.911 "superblock": true,
00:22:15.911 "num_base_bdevs": 2,
00:22:15.911 "num_base_bdevs_discovered": 2,
00:22:15.911 "num_base_bdevs_operational": 2,
00:22:15.911 "base_bdevs_list": [
00:22:15.911 {
00:22:15.911 "name": "pt1",
00:22:15.911 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:15.911 "is_configured": true,
00:22:15.911 "data_offset": 256,
00:22:15.911 "data_size": 7936
00:22:15.911 },
00:22:15.911 {
00:22:15.911 "name": "pt2",
00:22:15.911 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:15.911 "is_configured": true,
00:22:15.911 "data_offset": 256,
00:22:15.911 "data_size": 7936
00:22:15.911 }
00:22:15.911 ]
00:22:15.911 }
00:22:15.911 }
00:22:15.911 }'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:22:15.911 pt2'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:15.911 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:22:15.911 [2024-12-05 19:42:09.339137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 1f168ae7-cc4a-4278-a08d-7a7f666a1326 '!=' 1f168ae7-cc4a-4278-a08d-7a7f666a1326 ']'
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.168 [2024-12-05 19:42:09.390851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:16.168 "name": "raid_bdev1",
00:22:16.168 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326",
00:22:16.168 "strip_size_kb": 0,
00:22:16.168 "state": "online",
00:22:16.168 "raid_level": "raid1",
00:22:16.168 "superblock": true,
00:22:16.168 "num_base_bdevs": 2,
00:22:16.168 "num_base_bdevs_discovered": 1,
00:22:16.168 "num_base_bdevs_operational": 1,
00:22:16.168 "base_bdevs_list": [
00:22:16.168 {
00:22:16.168 "name": null,
00:22:16.168 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:16.168 "is_configured": false,
00:22:16.168 "data_offset": 0,
00:22:16.168 "data_size": 7936
00:22:16.168 },
00:22:16.168 {
00:22:16.168 "name": "pt2",
00:22:16.168 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:16.168 "is_configured": true,
00:22:16.168 "data_offset": 256,
00:22:16.168 "data_size": 7936
00:22:16.168 }
00:22:16.168 ]
00:22:16.168 }'
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:16.168 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.735 [2024-12-05 19:42:09.930964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:16.735 [2024-12-05 19:42:09.931133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:16.735 [2024-12-05 19:42:09.931349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:16.735 [2024-12-05 19:42:09.931428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:16.735 [2024-12-05 19:42:09.931449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.735 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.736 19:42:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.736 [2024-12-05 19:42:10.006991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:16.736 [2024-12-05 19:42:10.007071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:16.736 [2024-12-05 19:42:10.007098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:22:16.736 [2024-12-05 19:42:10.007114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:16.736 [2024-12-05 19:42:10.009747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:16.736 [2024-12-05 19:42:10.009925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:16.736 [2024-12-05 19:42:10.010011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:22:16.736 [2024-12-05 19:42:10.010078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:16.736 [2024-12-05 19:42:10.010206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:22:16.736 [2024-12-05 19:42:10.010228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:22:16.736 [2024-12-05 19:42:10.010319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:22:16.736 [2024-12-05 19:42:10.010464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:22:16.736 [2024-12-05 19:42:10.010478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:22:16.736 [2024-12-05 19:42:10.010602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:16.736 pt2
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:16.736 "name": "raid_bdev1",
00:22:16.736 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326",
00:22:16.736 "strip_size_kb": 0,
00:22:16.736 "state": "online",
00:22:16.736 "raid_level": "raid1",
00:22:16.736 "superblock": true,
00:22:16.736 "num_base_bdevs": 2,
00:22:16.736 "num_base_bdevs_discovered": 1,
00:22:16.736 "num_base_bdevs_operational": 1,
00:22:16.736 "base_bdevs_list": [
00:22:16.736 {
00:22:16.736 "name": null,
00:22:16.736 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:16.736 "is_configured": false,
00:22:16.736 "data_offset": 256,
00:22:16.736 "data_size": 7936
00:22:16.736 },
00:22:16.736 {
00:22:16.736 "name": "pt2",
00:22:16.736 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:16.736 "is_configured": true,
00:22:16.736 "data_offset": 256,
00:22:16.736 "data_size": 7936
00:22:16.736 }
00:22:16.736 ]
00:22:16.736 }'
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:16.736 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:17.310 [2024-12-05 19:42:10.511081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:17.310 [2024-12-05 19:42:10.511121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:17.310 [2024-12-05 19:42:10.511214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:17.310 [2024-12-05 19:42:10.511287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:17.310 [2024-12-05 19:42:10.511302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.310 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:22:17.310 [2024-12-05 19:42:10.575147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:17.310 [2024-12-05 19:42:10.575361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:17.310 [2024-12-05 19:42:10.575535] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:22:17.310 [2024-12-05 19:42:10.575646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:17.310 [2024-12-05 19:42:10.578230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:17.310 [2024-12-05 19:42:10.578384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:17.310 [2024-12-05 19:42:10.578567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:22:17.311 [2024-12-05 19:42:10.578738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:17.311 [2024-12-05 19:42:10.579021] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:22:17.311 [2024-12-05 19:42:10.579046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:17.311 [2024-12-05 19:42:10.579074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:22:17.311 [2024-12-05 19:42:10.579158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:17.311 [2024-12-05 19:42:10.579323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:22:17.311 [2024-12-05 19:42:10.579340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:22:17.311 [2024-12-05 19:42:10.579419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:22:17.311 [2024-12-05 19:42:10.579557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:22:17.311 [2024-12-05 19:42:10.579575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
pt1
[2024-12-05 19:42:10.579722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.311 "name": "raid_bdev1", 00:22:17.311 "uuid": "1f168ae7-cc4a-4278-a08d-7a7f666a1326", 00:22:17.311 "strip_size_kb": 0, 00:22:17.311 "state": "online", 00:22:17.311 "raid_level": "raid1", 00:22:17.311 "superblock": true, 00:22:17.311 "num_base_bdevs": 2, 00:22:17.311 "num_base_bdevs_discovered": 1, 00:22:17.311 
"num_base_bdevs_operational": 1, 00:22:17.311 "base_bdevs_list": [ 00:22:17.311 { 00:22:17.311 "name": null, 00:22:17.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.311 "is_configured": false, 00:22:17.311 "data_offset": 256, 00:22:17.311 "data_size": 7936 00:22:17.311 }, 00:22:17.311 { 00:22:17.311 "name": "pt2", 00:22:17.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.311 "is_configured": true, 00:22:17.311 "data_offset": 256, 00:22:17.311 "data_size": 7936 00:22:17.311 } 00:22:17.311 ] 00:22:17.311 }' 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.311 19:42:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.877 [2024-12-05 
19:42:11.203623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 1f168ae7-cc4a-4278-a08d-7a7f666a1326 '!=' 1f168ae7-cc4a-4278-a08d-7a7f666a1326 ']' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87905 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87905 ']' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87905 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87905 00:22:17.877 killing process with pid 87905 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87905' 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87905 00:22:17.877 [2024-12-05 19:42:11.288723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.877 [2024-12-05 19:42:11.288835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.877 19:42:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87905 
00:22:17.877 [2024-12-05 19:42:11.288902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.877 [2024-12-05 19:42:11.288928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:18.134 [2024-12-05 19:42:11.489559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.507 ************************************ 00:22:19.507 END TEST raid_superblock_test_md_separate 00:22:19.507 ************************************ 00:22:19.507 19:42:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:22:19.507 00:22:19.507 real 0m6.724s 00:22:19.507 user 0m10.654s 00:22:19.507 sys 0m0.948s 00:22:19.507 19:42:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.507 19:42:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 19:42:12 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:22:19.507 19:42:12 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:19.507 19:42:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:19.507 19:42:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.507 19:42:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 ************************************ 00:22:19.507 START TEST raid_rebuild_test_sb_md_separate 00:22:19.507 ************************************ 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:19.507 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:19.508 
19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88235 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88235 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88235 ']' 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.508 19:42:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.508 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:19.508 Zero copy mechanism will not be used. 00:22:19.508 [2024-12-05 19:42:12.723033] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:22:19.508 [2024-12-05 19:42:12.723193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88235 ] 00:22:19.508 [2024-12-05 19:42:12.896263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.766 [2024-12-05 19:42:13.025174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.025 [2024-12-05 19:42:13.227834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.025 [2024-12-05 19:42:13.227888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.284 BaseBdev1_malloc 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.284 19:42:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.284 [2024-12-05 19:42:13.669333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.284 [2024-12-05 19:42:13.669415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.284 [2024-12-05 19:42:13.669451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:20.284 [2024-12-05 19:42:13.669469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.284 [2024-12-05 19:42:13.672079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.284 [2024-12-05 19:42:13.672271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.284 BaseBdev1 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.284 BaseBdev2_malloc 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:20.284 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.557 [2024-12-05 19:42:13.726102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:20.557 [2024-12-05 19:42:13.726185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.557 [2024-12-05 19:42:13.726216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:20.557 [2024-12-05 19:42:13.726234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.557 [2024-12-05 19:42:13.729027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.557 [2024-12-05 19:42:13.729078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.557 BaseBdev2 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.557 spare_malloc 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.557 spare_delay 00:22:20.557 19:42:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.557 [2024-12-05 19:42:13.808494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.557 [2024-12-05 19:42:13.808745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.557 [2024-12-05 19:42:13.808791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:20.557 [2024-12-05 19:42:13.808811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.557 [2024-12-05 19:42:13.811376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.557 [2024-12-05 19:42:13.811446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.557 spare 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.557 [2024-12-05 19:42:13.820559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.557 [2024-12-05 19:42:13.822965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:22:20.557 [2024-12-05 19:42:13.823337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:20.557 [2024-12-05 19:42:13.823367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:20.557 [2024-12-05 19:42:13.823477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:20.557 [2024-12-05 19:42:13.823646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:20.557 [2024-12-05 19:42:13.823663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:20.557 [2024-12-05 19:42:13.823821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:20.557 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.558 "name": "raid_bdev1", 00:22:20.558 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:20.558 "strip_size_kb": 0, 00:22:20.558 "state": "online", 00:22:20.558 "raid_level": "raid1", 00:22:20.558 "superblock": true, 00:22:20.558 "num_base_bdevs": 2, 00:22:20.558 "num_base_bdevs_discovered": 2, 00:22:20.558 "num_base_bdevs_operational": 2, 00:22:20.558 "base_bdevs_list": [ 00:22:20.558 { 00:22:20.558 "name": "BaseBdev1", 00:22:20.558 "uuid": "b95fe46b-b3d9-5454-98fc-a10ebac82084", 00:22:20.558 "is_configured": true, 00:22:20.558 "data_offset": 256, 00:22:20.558 "data_size": 7936 00:22:20.558 }, 00:22:20.558 { 00:22:20.558 "name": "BaseBdev2", 00:22:20.558 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:20.558 "is_configured": true, 00:22:20.558 "data_offset": 256, 00:22:20.558 "data_size": 7936 00:22:20.558 } 00:22:20.558 ] 00:22:20.558 }' 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.558 19:42:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.134 19:42:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:21.134 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.134 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:21.135 [2024-12-05 19:42:14.341056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.135 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:21.394 [2024-12-05 19:42:14.728927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:21.394 /dev/nbd0 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.394 
19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:21.394 1+0 records in 00:22:21.394 1+0 records out 00:22:21.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362892 s, 11.3 MB/s 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:21.394 19:42:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:22.329 7936+0 records in 00:22:22.329 7936+0 records out 00:22:22.329 32505856 bytes (33 MB, 31 MiB) copied, 0.908236 s, 35.8 MB/s 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:22.329 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:22.587 [2024-12-05 19:42:15.989489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.587 19:42:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.587 [2024-12-05 19:42:16.005798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.587 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.845 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.845 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.845 "name": "raid_bdev1", 00:22:22.845 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:22.845 "strip_size_kb": 0, 00:22:22.845 "state": "online", 00:22:22.845 "raid_level": "raid1", 00:22:22.845 "superblock": true, 00:22:22.845 "num_base_bdevs": 2, 00:22:22.846 "num_base_bdevs_discovered": 1, 00:22:22.846 "num_base_bdevs_operational": 1, 00:22:22.846 "base_bdevs_list": [ 00:22:22.846 { 00:22:22.846 "name": null, 00:22:22.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.846 "is_configured": false, 00:22:22.846 "data_offset": 0, 00:22:22.846 "data_size": 7936 00:22:22.846 }, 00:22:22.846 { 00:22:22.846 "name": "BaseBdev2", 00:22:22.846 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:22.846 "is_configured": true, 00:22:22.846 "data_offset": 256, 00:22:22.846 "data_size": 7936 00:22:22.846 } 00:22:22.846 ] 00:22:22.846 }' 00:22:22.846 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.846 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.104 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.104 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:23.104 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.104 [2024-12-05 19:42:16.485919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.104 [2024-12-05 19:42:16.499788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:23.104 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.104 19:42:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:23.104 [2024-12-05 19:42:16.502413] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.482 19:42:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.482 "name": "raid_bdev1", 00:22:24.482 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:24.482 "strip_size_kb": 0, 00:22:24.482 "state": "online", 00:22:24.482 "raid_level": "raid1", 00:22:24.482 "superblock": true, 00:22:24.482 "num_base_bdevs": 2, 00:22:24.482 "num_base_bdevs_discovered": 2, 00:22:24.482 "num_base_bdevs_operational": 2, 00:22:24.482 "process": { 00:22:24.482 "type": "rebuild", 00:22:24.482 "target": "spare", 00:22:24.482 "progress": { 00:22:24.482 "blocks": 2560, 00:22:24.482 "percent": 32 00:22:24.482 } 00:22:24.482 }, 00:22:24.482 "base_bdevs_list": [ 00:22:24.482 { 00:22:24.482 "name": "spare", 00:22:24.482 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:24.482 "is_configured": true, 00:22:24.482 "data_offset": 256, 00:22:24.482 "data_size": 7936 00:22:24.482 }, 00:22:24.482 { 00:22:24.482 "name": "BaseBdev2", 00:22:24.482 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:24.482 "is_configured": true, 00:22:24.482 "data_offset": 256, 00:22:24.482 "data_size": 7936 00:22:24.482 } 00:22:24.482 ] 00:22:24.482 }' 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.482 [2024-12-05 19:42:17.671803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.482 [2024-12-05 19:42:17.711820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:24.482 [2024-12-05 19:42:17.711927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.482 [2024-12-05 19:42:17.711962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.482 [2024-12-05 19:42:17.711984] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:24.482 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.483 "name": "raid_bdev1", 00:22:24.483 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:24.483 "strip_size_kb": 0, 00:22:24.483 "state": "online", 00:22:24.483 "raid_level": "raid1", 00:22:24.483 "superblock": true, 00:22:24.483 "num_base_bdevs": 2, 00:22:24.483 "num_base_bdevs_discovered": 1, 00:22:24.483 "num_base_bdevs_operational": 1, 00:22:24.483 "base_bdevs_list": [ 00:22:24.483 { 00:22:24.483 "name": null, 00:22:24.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.483 "is_configured": false, 00:22:24.483 "data_offset": 0, 00:22:24.483 "data_size": 7936 00:22:24.483 }, 00:22:24.483 { 00:22:24.483 "name": "BaseBdev2", 00:22:24.483 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:24.483 "is_configured": true, 00:22:24.483 "data_offset": 256, 00:22:24.483 "data_size": 7936 00:22:24.483 } 00:22:24.483 ] 00:22:24.483 }' 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.483 19:42:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:25.051 19:42:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.051 "name": "raid_bdev1", 00:22:25.051 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:25.051 "strip_size_kb": 0, 00:22:25.051 "state": "online", 00:22:25.051 "raid_level": "raid1", 00:22:25.051 "superblock": true, 00:22:25.051 "num_base_bdevs": 2, 00:22:25.051 "num_base_bdevs_discovered": 1, 00:22:25.051 "num_base_bdevs_operational": 1, 00:22:25.051 "base_bdevs_list": [ 00:22:25.051 { 00:22:25.051 "name": null, 00:22:25.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.051 "is_configured": false, 00:22:25.051 "data_offset": 0, 00:22:25.051 "data_size": 7936 00:22:25.051 }, 00:22:25.051 { 00:22:25.051 "name": "BaseBdev2", 00:22:25.051 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:25.051 "is_configured": true, 00:22:25.051 "data_offset": 256, 00:22:25.051 "data_size": 7936 
00:22:25.051 } 00:22:25.051 ] 00:22:25.051 }' 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.051 [2024-12-05 19:42:18.406321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.051 [2024-12-05 19:42:18.419280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.051 19:42:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:25.051 [2024-12-05 19:42:18.421840] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.995 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.271 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.271 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.271 "name": "raid_bdev1", 00:22:26.271 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:26.271 "strip_size_kb": 0, 00:22:26.271 "state": "online", 00:22:26.271 "raid_level": "raid1", 00:22:26.271 "superblock": true, 00:22:26.271 "num_base_bdevs": 2, 00:22:26.271 "num_base_bdevs_discovered": 2, 00:22:26.271 "num_base_bdevs_operational": 2, 00:22:26.271 "process": { 00:22:26.271 "type": "rebuild", 00:22:26.272 "target": "spare", 00:22:26.272 "progress": { 00:22:26.272 "blocks": 2560, 00:22:26.272 "percent": 32 00:22:26.272 } 00:22:26.272 }, 00:22:26.272 "base_bdevs_list": [ 00:22:26.272 { 00:22:26.272 "name": "spare", 00:22:26.272 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:26.272 "is_configured": true, 00:22:26.272 "data_offset": 256, 00:22:26.272 "data_size": 7936 00:22:26.272 }, 00:22:26.272 { 00:22:26.272 "name": "BaseBdev2", 00:22:26.272 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:26.272 "is_configured": true, 00:22:26.272 "data_offset": 256, 00:22:26.272 "data_size": 7936 00:22:26.272 } 00:22:26.272 ] 00:22:26.272 }' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:26.272 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=773 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.272 
19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.272 "name": "raid_bdev1", 00:22:26.272 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:26.272 "strip_size_kb": 0, 00:22:26.272 "state": "online", 00:22:26.272 "raid_level": "raid1", 00:22:26.272 "superblock": true, 00:22:26.272 "num_base_bdevs": 2, 00:22:26.272 "num_base_bdevs_discovered": 2, 00:22:26.272 "num_base_bdevs_operational": 2, 00:22:26.272 "process": { 00:22:26.272 "type": "rebuild", 00:22:26.272 "target": "spare", 00:22:26.272 "progress": { 00:22:26.272 "blocks": 2816, 00:22:26.272 "percent": 35 00:22:26.272 } 00:22:26.272 }, 00:22:26.272 "base_bdevs_list": [ 00:22:26.272 { 00:22:26.272 "name": "spare", 00:22:26.272 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:26.272 "is_configured": true, 00:22:26.272 "data_offset": 256, 00:22:26.272 "data_size": 7936 00:22:26.272 }, 00:22:26.272 { 00:22:26.272 "name": "BaseBdev2", 00:22:26.272 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:26.272 "is_configured": true, 00:22:26.272 "data_offset": 256, 00:22:26.272 "data_size": 7936 00:22:26.272 } 00:22:26.272 ] 00:22:26.272 }' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.272 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.531 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.531 19:42:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.468 "name": "raid_bdev1", 00:22:27.468 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:27.468 "strip_size_kb": 0, 00:22:27.468 
"state": "online", 00:22:27.468 "raid_level": "raid1", 00:22:27.468 "superblock": true, 00:22:27.468 "num_base_bdevs": 2, 00:22:27.468 "num_base_bdevs_discovered": 2, 00:22:27.468 "num_base_bdevs_operational": 2, 00:22:27.468 "process": { 00:22:27.468 "type": "rebuild", 00:22:27.468 "target": "spare", 00:22:27.468 "progress": { 00:22:27.468 "blocks": 5888, 00:22:27.468 "percent": 74 00:22:27.468 } 00:22:27.468 }, 00:22:27.468 "base_bdevs_list": [ 00:22:27.468 { 00:22:27.468 "name": "spare", 00:22:27.468 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:27.468 "is_configured": true, 00:22:27.468 "data_offset": 256, 00:22:27.468 "data_size": 7936 00:22:27.468 }, 00:22:27.468 { 00:22:27.468 "name": "BaseBdev2", 00:22:27.468 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:27.468 "is_configured": true, 00:22:27.468 "data_offset": 256, 00:22:27.468 "data_size": 7936 00:22:27.468 } 00:22:27.468 ] 00:22:27.468 }' 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.468 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.727 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.727 19:42:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:28.293 [2024-12-05 19:42:21.546051] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:28.293 [2024-12-05 19:42:21.546168] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:28.293 [2024-12-05 19:42:21.546342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.552 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:28.552 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.552 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.553 "name": "raid_bdev1", 00:22:28.553 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:28.553 "strip_size_kb": 0, 00:22:28.553 "state": "online", 00:22:28.553 "raid_level": "raid1", 00:22:28.553 "superblock": true, 00:22:28.553 "num_base_bdevs": 2, 00:22:28.553 "num_base_bdevs_discovered": 2, 00:22:28.553 "num_base_bdevs_operational": 2, 00:22:28.553 "base_bdevs_list": [ 00:22:28.553 { 00:22:28.553 "name": "spare", 00:22:28.553 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:28.553 "is_configured": true, 00:22:28.553 "data_offset": 256, 00:22:28.553 "data_size": 7936 
00:22:28.553 }, 00:22:28.553 { 00:22:28.553 "name": "BaseBdev2", 00:22:28.553 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:28.553 "is_configured": true, 00:22:28.553 "data_offset": 256, 00:22:28.553 "data_size": 7936 00:22:28.553 } 00:22:28.553 ] 00:22:28.553 }' 00:22:28.553 19:42:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.812 
19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.812 "name": "raid_bdev1", 00:22:28.812 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:28.812 "strip_size_kb": 0, 00:22:28.812 "state": "online", 00:22:28.812 "raid_level": "raid1", 00:22:28.812 "superblock": true, 00:22:28.812 "num_base_bdevs": 2, 00:22:28.812 "num_base_bdevs_discovered": 2, 00:22:28.812 "num_base_bdevs_operational": 2, 00:22:28.812 "base_bdevs_list": [ 00:22:28.812 { 00:22:28.812 "name": "spare", 00:22:28.812 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:28.812 "is_configured": true, 00:22:28.812 "data_offset": 256, 00:22:28.812 "data_size": 7936 00:22:28.812 }, 00:22:28.812 { 00:22:28.812 "name": "BaseBdev2", 00:22:28.812 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:28.812 "is_configured": true, 00:22:28.812 "data_offset": 256, 00:22:28.812 "data_size": 7936 00:22:28.812 } 00:22:28.812 ] 00:22:28.812 }' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.812 19:42:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.812 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.071 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.071 "name": "raid_bdev1", 00:22:29.071 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:29.071 "strip_size_kb": 0, 00:22:29.071 "state": "online", 00:22:29.071 "raid_level": "raid1", 00:22:29.071 "superblock": true, 00:22:29.071 "num_base_bdevs": 2, 00:22:29.071 "num_base_bdevs_discovered": 2, 00:22:29.071 "num_base_bdevs_operational": 2, 00:22:29.071 "base_bdevs_list": [ 00:22:29.071 { 00:22:29.071 "name": "spare", 00:22:29.071 "uuid": 
"7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:29.071 "is_configured": true, 00:22:29.071 "data_offset": 256, 00:22:29.071 "data_size": 7936 00:22:29.071 }, 00:22:29.071 { 00:22:29.071 "name": "BaseBdev2", 00:22:29.071 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:29.071 "is_configured": true, 00:22:29.071 "data_offset": 256, 00:22:29.071 "data_size": 7936 00:22:29.071 } 00:22:29.071 ] 00:22:29.071 }' 00:22:29.071 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.071 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.330 [2024-12-05 19:42:22.692908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.330 [2024-12-05 19:42:22.693092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.330 [2024-12-05 19:42:22.693253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.330 [2024-12-05 19:42:22.693348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.330 [2024-12-05 19:42:22.693365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:29.330 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:29.331 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.331 19:42:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 
00:22:29.898 /dev/nbd0 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.898 1+0 records in 00:22:29.898 1+0 records out 00:22:29.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459565 s, 8.9 MB/s 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.898 19:42:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.898 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:30.157 /dev/nbd1 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:22:30.157 1+0 records in 00:22:30.157 1+0 records out 00:22:30.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054115 s, 7.6 MB/s 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:30.157 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.416 19:42:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.675 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:31.244 
19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.244 [2024-12-05 19:42:24.398713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:31.244 [2024-12-05 19:42:24.398806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.244 [2024-12-05 19:42:24.398841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:31.244 [2024-12-05 19:42:24.398856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.244 [2024-12-05 19:42:24.401820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.244 [2024-12-05 19:42:24.401865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:31.244 [2024-12-05 19:42:24.401957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:22:31.244 [2024-12-05 19:42:24.402026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:31.244 [2024-12-05 19:42:24.402245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.244 spare 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.244 [2024-12-05 19:42:24.502399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:31.244 [2024-12-05 19:42:24.502484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:31.244 [2024-12-05 19:42:24.502682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:31.244 [2024-12-05 19:42:24.502949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:31.244 [2024-12-05 19:42:24.502980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:31.244 [2024-12-05 19:42:24.503228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.244 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.244 "name": "raid_bdev1", 00:22:31.244 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:31.244 "strip_size_kb": 0, 00:22:31.244 "state": "online", 00:22:31.244 "raid_level": "raid1", 00:22:31.244 "superblock": true, 00:22:31.244 "num_base_bdevs": 2, 00:22:31.244 "num_base_bdevs_discovered": 2, 00:22:31.244 "num_base_bdevs_operational": 2, 00:22:31.244 "base_bdevs_list": [ 
00:22:31.244 { 00:22:31.244 "name": "spare", 00:22:31.244 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:31.245 "is_configured": true, 00:22:31.245 "data_offset": 256, 00:22:31.245 "data_size": 7936 00:22:31.245 }, 00:22:31.245 { 00:22:31.245 "name": "BaseBdev2", 00:22:31.245 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:31.245 "is_configured": true, 00:22:31.245 "data_offset": 256, 00:22:31.245 "data_size": 7936 00:22:31.245 } 00:22:31.245 ] 00:22:31.245 }' 00:22:31.245 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.245 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.820 19:42:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.820 "name": "raid_bdev1", 00:22:31.820 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:31.820 "strip_size_kb": 0, 00:22:31.820 "state": "online", 00:22:31.820 "raid_level": "raid1", 00:22:31.820 "superblock": true, 00:22:31.820 "num_base_bdevs": 2, 00:22:31.820 "num_base_bdevs_discovered": 2, 00:22:31.820 "num_base_bdevs_operational": 2, 00:22:31.820 "base_bdevs_list": [ 00:22:31.820 { 00:22:31.820 "name": "spare", 00:22:31.820 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:31.820 "is_configured": true, 00:22:31.820 "data_offset": 256, 00:22:31.820 "data_size": 7936 00:22:31.820 }, 00:22:31.820 { 00:22:31.820 "name": "BaseBdev2", 00:22:31.820 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:31.820 "is_configured": true, 00:22:31.820 "data_offset": 256, 00:22:31.820 "data_size": 7936 00:22:31.820 } 00:22:31.820 ] 00:22:31.820 }' 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.820 [2024-12-05 19:42:25.163387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.820 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.820 "name": "raid_bdev1", 00:22:31.820 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:31.820 "strip_size_kb": 0, 00:22:31.820 "state": "online", 00:22:31.821 "raid_level": "raid1", 00:22:31.821 "superblock": true, 00:22:31.821 "num_base_bdevs": 2, 00:22:31.821 "num_base_bdevs_discovered": 1, 00:22:31.821 "num_base_bdevs_operational": 1, 00:22:31.821 "base_bdevs_list": [ 00:22:31.821 { 00:22:31.821 "name": null, 00:22:31.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.821 "is_configured": false, 00:22:31.821 "data_offset": 0, 00:22:31.821 "data_size": 7936 00:22:31.821 }, 00:22:31.821 { 00:22:31.821 "name": "BaseBdev2", 00:22:31.821 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:31.821 "is_configured": true, 00:22:31.821 "data_offset": 256, 00:22:31.821 "data_size": 7936 00:22:31.821 } 00:22:31.821 ] 00:22:31.821 }' 00:22:31.821 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.821 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.396 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:32.397 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:32.397 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.397 [2024-12-05 19:42:25.635632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.397 [2024-12-05 19:42:25.635943] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:32.397 [2024-12-05 19:42:25.636004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:32.397 [2024-12-05 19:42:25.636076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.397 [2024-12-05 19:42:25.648851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:32.397 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.397 19:42:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:32.397 [2024-12-05 19:42:25.651514] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.333 "name": "raid_bdev1", 00:22:33.333 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:33.333 "strip_size_kb": 0, 00:22:33.333 "state": "online", 00:22:33.333 "raid_level": "raid1", 00:22:33.333 "superblock": true, 00:22:33.333 "num_base_bdevs": 2, 00:22:33.333 "num_base_bdevs_discovered": 2, 00:22:33.333 "num_base_bdevs_operational": 2, 00:22:33.333 "process": { 00:22:33.333 "type": "rebuild", 00:22:33.333 "target": "spare", 00:22:33.333 "progress": { 00:22:33.333 "blocks": 2560, 00:22:33.333 "percent": 32 00:22:33.333 } 00:22:33.333 }, 00:22:33.333 "base_bdevs_list": [ 00:22:33.333 { 00:22:33.333 "name": "spare", 00:22:33.333 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:33.333 "is_configured": true, 00:22:33.333 "data_offset": 256, 00:22:33.333 "data_size": 7936 00:22:33.333 }, 00:22:33.333 { 00:22:33.333 "name": "BaseBdev2", 00:22:33.333 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:33.333 "is_configured": true, 00:22:33.333 "data_offset": 256, 00:22:33.333 "data_size": 7936 00:22:33.333 } 00:22:33.333 ] 00:22:33.333 }' 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.333 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.592 19:42:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.592 [2024-12-05 19:42:26.818493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:33.592 [2024-12-05 19:42:26.862256] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:33.592 [2024-12-05 19:42:26.862388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.592 [2024-12-05 19:42:26.862415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:33.592 [2024-12-05 19:42:26.862445] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:33.592 19:42:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.592 "name": "raid_bdev1", 00:22:33.592 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:33.592 "strip_size_kb": 0, 00:22:33.592 "state": "online", 00:22:33.592 "raid_level": "raid1", 00:22:33.592 "superblock": true, 00:22:33.592 "num_base_bdevs": 2, 00:22:33.592 "num_base_bdevs_discovered": 1, 00:22:33.592 "num_base_bdevs_operational": 1, 00:22:33.592 "base_bdevs_list": [ 00:22:33.592 { 00:22:33.592 "name": null, 00:22:33.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.592 "is_configured": false, 00:22:33.592 "data_offset": 0, 00:22:33.592 "data_size": 7936 00:22:33.592 }, 00:22:33.592 { 00:22:33.592 "name": "BaseBdev2", 00:22:33.592 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:33.592 "is_configured": true, 00:22:33.592 "data_offset": 256, 00:22:33.592 "data_size": 7936 00:22:33.592 } 
00:22:33.592 ] 00:22:33.592 }' 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.592 19:42:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.158 19:42:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:34.158 19:42:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.158 19:42:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.158 [2024-12-05 19:42:27.393189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:34.158 [2024-12-05 19:42:27.393269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.158 [2024-12-05 19:42:27.393307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:34.158 [2024-12-05 19:42:27.393327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.158 [2024-12-05 19:42:27.393669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.158 [2024-12-05 19:42:27.393728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:34.158 [2024-12-05 19:42:27.393830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:34.158 [2024-12-05 19:42:27.393855] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:34.158 [2024-12-05 19:42:27.393870] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:34.158 [2024-12-05 19:42:27.393901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:34.158 [2024-12-05 19:42:27.407031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:34.158 spare 00:22:34.158 19:42:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.159 19:42:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:34.159 [2024-12-05 19:42:27.409839] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.093 "name": 
"raid_bdev1", 00:22:35.093 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:35.093 "strip_size_kb": 0, 00:22:35.093 "state": "online", 00:22:35.093 "raid_level": "raid1", 00:22:35.093 "superblock": true, 00:22:35.093 "num_base_bdevs": 2, 00:22:35.093 "num_base_bdevs_discovered": 2, 00:22:35.093 "num_base_bdevs_operational": 2, 00:22:35.093 "process": { 00:22:35.093 "type": "rebuild", 00:22:35.093 "target": "spare", 00:22:35.093 "progress": { 00:22:35.093 "blocks": 2560, 00:22:35.093 "percent": 32 00:22:35.093 } 00:22:35.093 }, 00:22:35.093 "base_bdevs_list": [ 00:22:35.093 { 00:22:35.093 "name": "spare", 00:22:35.093 "uuid": "7fd22c97-e1a1-589c-86de-7e4bf5bf2f6a", 00:22:35.093 "is_configured": true, 00:22:35.093 "data_offset": 256, 00:22:35.093 "data_size": 7936 00:22:35.093 }, 00:22:35.093 { 00:22:35.093 "name": "BaseBdev2", 00:22:35.093 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:35.093 "is_configured": true, 00:22:35.093 "data_offset": 256, 00:22:35.093 "data_size": 7936 00:22:35.093 } 00:22:35.093 ] 00:22:35.093 }' 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.093 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.352 [2024-12-05 19:42:28.579908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:35.352 [2024-12-05 19:42:28.619859] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:35.352 [2024-12-05 19:42:28.619968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.352 [2024-12-05 19:42:28.620016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:35.352 [2024-12-05 19:42:28.620028] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.352 "name": "raid_bdev1", 00:22:35.352 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:35.352 "strip_size_kb": 0, 00:22:35.352 "state": "online", 00:22:35.352 "raid_level": "raid1", 00:22:35.352 "superblock": true, 00:22:35.352 "num_base_bdevs": 2, 00:22:35.352 "num_base_bdevs_discovered": 1, 00:22:35.352 "num_base_bdevs_operational": 1, 00:22:35.352 "base_bdevs_list": [ 00:22:35.352 { 00:22:35.352 "name": null, 00:22:35.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.352 "is_configured": false, 00:22:35.352 "data_offset": 0, 00:22:35.352 "data_size": 7936 00:22:35.352 }, 00:22:35.352 { 00:22:35.352 "name": "BaseBdev2", 00:22:35.352 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:35.352 "is_configured": true, 00:22:35.352 "data_offset": 256, 00:22:35.352 "data_size": 7936 00:22:35.352 } 00:22:35.352 ] 00:22:35.352 }' 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.352 19:42:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.918 19:42:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.918 "name": "raid_bdev1", 00:22:35.918 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:35.918 "strip_size_kb": 0, 00:22:35.918 "state": "online", 00:22:35.918 "raid_level": "raid1", 00:22:35.918 "superblock": true, 00:22:35.918 "num_base_bdevs": 2, 00:22:35.918 "num_base_bdevs_discovered": 1, 00:22:35.918 "num_base_bdevs_operational": 1, 00:22:35.918 "base_bdevs_list": [ 00:22:35.918 { 00:22:35.918 "name": null, 00:22:35.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.918 "is_configured": false, 00:22:35.918 "data_offset": 0, 00:22:35.918 "data_size": 7936 00:22:35.918 }, 00:22:35.918 { 00:22:35.918 "name": "BaseBdev2", 00:22:35.918 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:35.918 "is_configured": true, 00:22:35.918 "data_offset": 256, 00:22:35.918 "data_size": 7936 00:22:35.918 } 00:22:35.918 ] 00:22:35.918 }' 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.918 [2024-12-05 19:42:29.342278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:35.918 [2024-12-05 19:42:29.342335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.918 [2024-12-05 19:42:29.342365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:35.918 [2024-12-05 19:42:29.342379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.918 [2024-12-05 19:42:29.342666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.918 [2024-12-05 19:42:29.342688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:22:35.918 [2024-12-05 19:42:29.342812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:35.918 [2024-12-05 19:42:29.342835] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:35.918 [2024-12-05 19:42:29.342849] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:35.918 [2024-12-05 19:42:29.342863] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:35.918 BaseBdev1 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.918 19:42:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.293 "name": "raid_bdev1", 00:22:37.293 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:37.293 "strip_size_kb": 0, 00:22:37.293 "state": "online", 00:22:37.293 "raid_level": "raid1", 00:22:37.293 "superblock": true, 00:22:37.293 "num_base_bdevs": 2, 00:22:37.293 "num_base_bdevs_discovered": 1, 00:22:37.293 "num_base_bdevs_operational": 1, 00:22:37.293 "base_bdevs_list": [ 00:22:37.293 { 00:22:37.293 "name": null, 00:22:37.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.293 "is_configured": false, 00:22:37.293 "data_offset": 0, 00:22:37.293 "data_size": 7936 00:22:37.293 }, 00:22:37.293 { 00:22:37.293 "name": "BaseBdev2", 00:22:37.293 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:37.293 "is_configured": true, 00:22:37.293 "data_offset": 256, 00:22:37.293 "data_size": 7936 00:22:37.293 } 00:22:37.293 ] 00:22:37.293 }' 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.293 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.552 "name": "raid_bdev1", 00:22:37.552 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:37.552 "strip_size_kb": 0, 00:22:37.552 "state": "online", 00:22:37.552 "raid_level": "raid1", 00:22:37.552 "superblock": true, 00:22:37.552 "num_base_bdevs": 2, 00:22:37.552 "num_base_bdevs_discovered": 1, 00:22:37.552 "num_base_bdevs_operational": 1, 00:22:37.552 "base_bdevs_list": [ 00:22:37.552 { 00:22:37.552 "name": null, 00:22:37.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.552 "is_configured": false, 00:22:37.552 "data_offset": 0, 00:22:37.552 "data_size": 7936 00:22:37.552 }, 00:22:37.552 { 00:22:37.552 "name": "BaseBdev2", 00:22:37.552 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:37.552 "is_configured": 
true, 00:22:37.552 "data_offset": 256, 00:22:37.552 "data_size": 7936 00:22:37.552 } 00:22:37.552 ] 00:22:37.552 }' 00:22:37.552 19:42:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.811 [2024-12-05 19:42:31.071148] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.811 [2024-12-05 19:42:31.071389] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:37.811 [2024-12-05 19:42:31.071412] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:37.811 request: 00:22:37.811 { 00:22:37.811 "base_bdev": "BaseBdev1", 00:22:37.811 "raid_bdev": "raid_bdev1", 00:22:37.811 "method": "bdev_raid_add_base_bdev", 00:22:37.811 "req_id": 1 00:22:37.811 } 00:22:37.811 Got JSON-RPC error response 00:22:37.811 response: 00:22:37.811 { 00:22:37.811 "code": -22, 00:22:37.811 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:37.811 } 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.811 19:42:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.756 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.756 "name": "raid_bdev1", 00:22:38.756 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:38.756 "strip_size_kb": 0, 00:22:38.756 "state": "online", 00:22:38.756 "raid_level": "raid1", 00:22:38.756 "superblock": true, 00:22:38.756 "num_base_bdevs": 2, 00:22:38.756 "num_base_bdevs_discovered": 1, 00:22:38.756 "num_base_bdevs_operational": 1, 00:22:38.756 "base_bdevs_list": [ 00:22:38.756 { 00:22:38.756 "name": null, 00:22:38.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.756 "is_configured": false, 00:22:38.756 
"data_offset": 0, 00:22:38.756 "data_size": 7936 00:22:38.757 }, 00:22:38.757 { 00:22:38.757 "name": "BaseBdev2", 00:22:38.757 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:38.757 "is_configured": true, 00:22:38.757 "data_offset": 256, 00:22:38.757 "data_size": 7936 00:22:38.757 } 00:22:38.757 ] 00:22:38.757 }' 00:22:38.757 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.757 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:39.325 "name": "raid_bdev1", 00:22:39.325 "uuid": "e98abd70-645c-4bb5-ba9d-f1794c8851e8", 00:22:39.325 
"strip_size_kb": 0, 00:22:39.325 "state": "online", 00:22:39.325 "raid_level": "raid1", 00:22:39.325 "superblock": true, 00:22:39.325 "num_base_bdevs": 2, 00:22:39.325 "num_base_bdevs_discovered": 1, 00:22:39.325 "num_base_bdevs_operational": 1, 00:22:39.325 "base_bdevs_list": [ 00:22:39.325 { 00:22:39.325 "name": null, 00:22:39.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.325 "is_configured": false, 00:22:39.325 "data_offset": 0, 00:22:39.325 "data_size": 7936 00:22:39.325 }, 00:22:39.325 { 00:22:39.325 "name": "BaseBdev2", 00:22:39.325 "uuid": "0e34ce29-3231-55ca-b08d-86f3342984e9", 00:22:39.325 "is_configured": true, 00:22:39.325 "data_offset": 256, 00:22:39.325 "data_size": 7936 00:22:39.325 } 00:22:39.325 ] 00:22:39.325 }' 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:39.325 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88235 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88235 ']' 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88235 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88235 00:22:39.585 19:42:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.585 killing process with pid 88235 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88235' 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88235 00:22:39.585 Received shutdown signal, test time was about 60.000000 seconds 00:22:39.585 00:22:39.585 Latency(us) 00:22:39.585 [2024-12-05T19:42:33.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.585 [2024-12-05T19:42:33.026Z] =================================================================================================================== 00:22:39.585 [2024-12-05T19:42:33.026Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.585 [2024-12-05 19:42:32.806379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.585 19:42:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88235 00:22:39.585 [2024-12-05 19:42:32.806529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.585 [2024-12-05 19:42:32.806594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.585 [2024-12-05 19:42:32.806620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:39.844 [2024-12-05 19:42:33.058493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.783 19:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:22:40.783 00:22:40.783 real 0m21.469s 00:22:40.783 user 0m29.068s 00:22:40.783 sys 0m2.556s 00:22:40.783 19:42:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.783 ************************************ 00:22:40.783 END TEST raid_rebuild_test_sb_md_separate 00:22:40.783 19:42:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.783 ************************************ 00:22:40.783 19:42:34 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:22:40.783 19:42:34 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:40.783 19:42:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:40.783 19:42:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.783 19:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.783 ************************************ 00:22:40.783 START TEST raid_state_function_test_sb_md_interleaved 00:22:40.783 ************************************ 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:40.783 19:42:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88939 00:22:40.783 Process raid pid: 88939 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88939' 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88939 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88939 ']' 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.783 19:42:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.043 [2024-12-05 19:42:34.230011] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:22:41.043 [2024-12-05 19:42:34.230185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.043 [2024-12-05 19:42:34.406193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.302 [2024-12-05 19:42:34.547702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.562 [2024-12-05 19:42:34.762544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.562 [2024-12-05 19:42:34.762609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.822 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.822 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:41.822 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:41.822 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.822 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.822 [2024-12-05 19:42:35.237772] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:41.822 [2024-12-05 19:42:35.237836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:41.822 [2024-12-05 19:42:35.237852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.822 [2024-12-05 19:42:35.237873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.822 19:42:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.822 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.823 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.086 19:42:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.086 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.086 "name": "Existed_Raid", 00:22:42.086 "uuid": "69ffcbf3-5d17-4d77-927b-06783ba34999", 00:22:42.086 "strip_size_kb": 0, 00:22:42.086 "state": "configuring", 00:22:42.086 "raid_level": "raid1", 00:22:42.086 "superblock": true, 00:22:42.086 "num_base_bdevs": 2, 00:22:42.086 "num_base_bdevs_discovered": 0, 00:22:42.086 "num_base_bdevs_operational": 2, 00:22:42.086 "base_bdevs_list": [ 00:22:42.086 { 00:22:42.086 "name": "BaseBdev1", 00:22:42.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.086 "is_configured": false, 00:22:42.086 "data_offset": 0, 00:22:42.086 "data_size": 0 00:22:42.086 }, 00:22:42.086 { 00:22:42.086 "name": "BaseBdev2", 00:22:42.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.086 "is_configured": false, 00:22:42.086 "data_offset": 0, 00:22:42.086 "data_size": 0 00:22:42.086 } 00:22:42.086 ] 00:22:42.086 }' 00:22:42.086 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.086 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.344 [2024-12-05 19:42:35.757943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:42.344 [2024-12-05 19:42:35.757987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.344 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.345 [2024-12-05 19:42:35.765949] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.345 [2024-12-05 19:42:35.765997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.345 [2024-12-05 19:42:35.766011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:42.345 [2024-12-05 19:42:35.766030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:42.345 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.345 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:42.345 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.345 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.603 [2024-12-05 19:42:35.811508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:42.603 BaseBdev1 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.603 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.603 [ 00:22:42.603 { 00:22:42.603 "name": "BaseBdev1", 00:22:42.603 "aliases": [ 00:22:42.603 "a7786c9e-d6ab-47ab-b7e5-ee3bc998e297" 00:22:42.603 ], 00:22:42.603 "product_name": "Malloc disk", 00:22:42.603 "block_size": 4128, 00:22:42.603 "num_blocks": 8192, 00:22:42.603 "uuid": "a7786c9e-d6ab-47ab-b7e5-ee3bc998e297", 00:22:42.603 "md_size": 32, 00:22:42.603 
"md_interleave": true, 00:22:42.603 "dif_type": 0, 00:22:42.604 "assigned_rate_limits": { 00:22:42.604 "rw_ios_per_sec": 0, 00:22:42.604 "rw_mbytes_per_sec": 0, 00:22:42.604 "r_mbytes_per_sec": 0, 00:22:42.604 "w_mbytes_per_sec": 0 00:22:42.604 }, 00:22:42.604 "claimed": true, 00:22:42.604 "claim_type": "exclusive_write", 00:22:42.604 "zoned": false, 00:22:42.604 "supported_io_types": { 00:22:42.604 "read": true, 00:22:42.604 "write": true, 00:22:42.604 "unmap": true, 00:22:42.604 "flush": true, 00:22:42.604 "reset": true, 00:22:42.604 "nvme_admin": false, 00:22:42.604 "nvme_io": false, 00:22:42.604 "nvme_io_md": false, 00:22:42.604 "write_zeroes": true, 00:22:42.604 "zcopy": true, 00:22:42.604 "get_zone_info": false, 00:22:42.604 "zone_management": false, 00:22:42.604 "zone_append": false, 00:22:42.604 "compare": false, 00:22:42.604 "compare_and_write": false, 00:22:42.604 "abort": true, 00:22:42.604 "seek_hole": false, 00:22:42.604 "seek_data": false, 00:22:42.604 "copy": true, 00:22:42.604 "nvme_iov_md": false 00:22:42.604 }, 00:22:42.604 "memory_domains": [ 00:22:42.604 { 00:22:42.604 "dma_device_id": "system", 00:22:42.604 "dma_device_type": 1 00:22:42.604 }, 00:22:42.604 { 00:22:42.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.604 "dma_device_type": 2 00:22:42.604 } 00:22:42.604 ], 00:22:42.604 "driver_specific": {} 00:22:42.604 } 00:22:42.604 ] 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.604 19:42:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.604 "name": "Existed_Raid", 00:22:42.604 "uuid": "42d0ce49-47ed-45dc-b57b-ecde1432cba5", 00:22:42.604 "strip_size_kb": 0, 00:22:42.604 "state": "configuring", 00:22:42.604 "raid_level": "raid1", 
00:22:42.604 "superblock": true, 00:22:42.604 "num_base_bdevs": 2, 00:22:42.604 "num_base_bdevs_discovered": 1, 00:22:42.604 "num_base_bdevs_operational": 2, 00:22:42.604 "base_bdevs_list": [ 00:22:42.604 { 00:22:42.604 "name": "BaseBdev1", 00:22:42.604 "uuid": "a7786c9e-d6ab-47ab-b7e5-ee3bc998e297", 00:22:42.604 "is_configured": true, 00:22:42.604 "data_offset": 256, 00:22:42.604 "data_size": 7936 00:22:42.604 }, 00:22:42.604 { 00:22:42.604 "name": "BaseBdev2", 00:22:42.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.604 "is_configured": false, 00:22:42.604 "data_offset": 0, 00:22:42.604 "data_size": 0 00:22:42.604 } 00:22:42.604 ] 00:22:42.604 }' 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.604 19:42:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.172 [2024-12-05 19:42:36.375877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:43.172 [2024-12-05 19:42:36.375938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.172 [2024-12-05 19:42:36.383929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.172 [2024-12-05 19:42:36.386325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:43.172 [2024-12-05 19:42:36.386373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.172 
19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.172 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.172 "name": "Existed_Raid", 00:22:43.172 "uuid": "361223d1-344f-4d99-93f5-1599cbd69314", 00:22:43.172 "strip_size_kb": 0, 00:22:43.172 "state": "configuring", 00:22:43.172 "raid_level": "raid1", 00:22:43.172 "superblock": true, 00:22:43.172 "num_base_bdevs": 2, 00:22:43.172 "num_base_bdevs_discovered": 1, 00:22:43.172 "num_base_bdevs_operational": 2, 00:22:43.172 "base_bdevs_list": [ 00:22:43.172 { 00:22:43.172 "name": "BaseBdev1", 00:22:43.172 "uuid": "a7786c9e-d6ab-47ab-b7e5-ee3bc998e297", 00:22:43.172 "is_configured": true, 00:22:43.172 "data_offset": 256, 00:22:43.172 "data_size": 7936 00:22:43.172 }, 00:22:43.172 { 00:22:43.172 "name": "BaseBdev2", 00:22:43.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.173 "is_configured": false, 00:22:43.173 "data_offset": 0, 00:22:43.173 "data_size": 0 00:22:43.173 } 00:22:43.173 ] 00:22:43.173 }' 00:22:43.173 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:43.173 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.741 [2024-12-05 19:42:36.935651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.741 [2024-12-05 19:42:36.935913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:43.741 [2024-12-05 19:42:36.935930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:43.741 [2024-12-05 19:42:36.936081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:43.741 [2024-12-05 19:42:36.936191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:43.741 [2024-12-05 19:42:36.936211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:43.741 [2024-12-05 19:42:36.936294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.741 BaseBdev2 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.741 [ 00:22:43.741 { 00:22:43.741 "name": "BaseBdev2", 00:22:43.741 "aliases": [ 00:22:43.741 "ab4d6872-3dda-4c08-87b3-c5caaf313317" 00:22:43.741 ], 00:22:43.741 "product_name": "Malloc disk", 00:22:43.741 "block_size": 4128, 00:22:43.741 "num_blocks": 8192, 00:22:43.741 "uuid": "ab4d6872-3dda-4c08-87b3-c5caaf313317", 00:22:43.741 "md_size": 32, 00:22:43.741 "md_interleave": true, 00:22:43.741 "dif_type": 0, 00:22:43.741 "assigned_rate_limits": { 00:22:43.741 "rw_ios_per_sec": 0, 00:22:43.741 "rw_mbytes_per_sec": 0, 00:22:43.741 "r_mbytes_per_sec": 0, 00:22:43.741 "w_mbytes_per_sec": 0 00:22:43.741 }, 00:22:43.741 "claimed": true, 00:22:43.741 "claim_type": "exclusive_write", 
00:22:43.741 "zoned": false, 00:22:43.741 "supported_io_types": { 00:22:43.741 "read": true, 00:22:43.741 "write": true, 00:22:43.741 "unmap": true, 00:22:43.741 "flush": true, 00:22:43.741 "reset": true, 00:22:43.741 "nvme_admin": false, 00:22:43.741 "nvme_io": false, 00:22:43.741 "nvme_io_md": false, 00:22:43.741 "write_zeroes": true, 00:22:43.741 "zcopy": true, 00:22:43.741 "get_zone_info": false, 00:22:43.741 "zone_management": false, 00:22:43.741 "zone_append": false, 00:22:43.741 "compare": false, 00:22:43.741 "compare_and_write": false, 00:22:43.741 "abort": true, 00:22:43.741 "seek_hole": false, 00:22:43.741 "seek_data": false, 00:22:43.741 "copy": true, 00:22:43.741 "nvme_iov_md": false 00:22:43.741 }, 00:22:43.741 "memory_domains": [ 00:22:43.741 { 00:22:43.741 "dma_device_id": "system", 00:22:43.741 "dma_device_type": 1 00:22:43.741 }, 00:22:43.741 { 00:22:43.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.741 "dma_device_type": 2 00:22:43.741 } 00:22:43.741 ], 00:22:43.741 "driver_specific": {} 00:22:43.741 } 00:22:43.741 ] 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.741 
19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.741 19:42:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.741 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.741 "name": "Existed_Raid", 00:22:43.741 "uuid": "361223d1-344f-4d99-93f5-1599cbd69314", 00:22:43.741 "strip_size_kb": 0, 00:22:43.741 "state": "online", 00:22:43.741 "raid_level": "raid1", 00:22:43.741 "superblock": true, 00:22:43.741 "num_base_bdevs": 2, 00:22:43.741 "num_base_bdevs_discovered": 2, 00:22:43.741 
"num_base_bdevs_operational": 2, 00:22:43.741 "base_bdevs_list": [ 00:22:43.741 { 00:22:43.741 "name": "BaseBdev1", 00:22:43.741 "uuid": "a7786c9e-d6ab-47ab-b7e5-ee3bc998e297", 00:22:43.741 "is_configured": true, 00:22:43.741 "data_offset": 256, 00:22:43.741 "data_size": 7936 00:22:43.741 }, 00:22:43.741 { 00:22:43.741 "name": "BaseBdev2", 00:22:43.741 "uuid": "ab4d6872-3dda-4c08-87b3-c5caaf313317", 00:22:43.741 "is_configured": true, 00:22:43.741 "data_offset": 256, 00:22:43.741 "data_size": 7936 00:22:43.741 } 00:22:43.741 ] 00:22:43.741 }' 00:22:43.741 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.741 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:44.310 19:42:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.310 [2024-12-05 19:42:37.492302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.310 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.310 "name": "Existed_Raid", 00:22:44.310 "aliases": [ 00:22:44.310 "361223d1-344f-4d99-93f5-1599cbd69314" 00:22:44.310 ], 00:22:44.310 "product_name": "Raid Volume", 00:22:44.310 "block_size": 4128, 00:22:44.310 "num_blocks": 7936, 00:22:44.310 "uuid": "361223d1-344f-4d99-93f5-1599cbd69314", 00:22:44.310 "md_size": 32, 00:22:44.310 "md_interleave": true, 00:22:44.310 "dif_type": 0, 00:22:44.310 "assigned_rate_limits": { 00:22:44.310 "rw_ios_per_sec": 0, 00:22:44.310 "rw_mbytes_per_sec": 0, 00:22:44.310 "r_mbytes_per_sec": 0, 00:22:44.310 "w_mbytes_per_sec": 0 00:22:44.310 }, 00:22:44.310 "claimed": false, 00:22:44.310 "zoned": false, 00:22:44.310 "supported_io_types": { 00:22:44.310 "read": true, 00:22:44.310 "write": true, 00:22:44.310 "unmap": false, 00:22:44.310 "flush": false, 00:22:44.310 "reset": true, 00:22:44.310 "nvme_admin": false, 00:22:44.310 "nvme_io": false, 00:22:44.310 "nvme_io_md": false, 00:22:44.310 "write_zeroes": true, 00:22:44.310 "zcopy": false, 00:22:44.310 "get_zone_info": false, 00:22:44.310 "zone_management": false, 00:22:44.310 "zone_append": false, 00:22:44.311 "compare": false, 00:22:44.311 "compare_and_write": false, 00:22:44.311 "abort": false, 00:22:44.311 "seek_hole": false, 00:22:44.311 "seek_data": false, 00:22:44.311 "copy": false, 00:22:44.311 "nvme_iov_md": false 00:22:44.311 }, 00:22:44.311 "memory_domains": [ 00:22:44.311 { 00:22:44.311 "dma_device_id": "system", 00:22:44.311 "dma_device_type": 1 00:22:44.311 }, 00:22:44.311 { 00:22:44.311 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:44.311 "dma_device_type": 2 00:22:44.311 }, 00:22:44.311 { 00:22:44.311 "dma_device_id": "system", 00:22:44.311 "dma_device_type": 1 00:22:44.311 }, 00:22:44.311 { 00:22:44.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.311 "dma_device_type": 2 00:22:44.311 } 00:22:44.311 ], 00:22:44.311 "driver_specific": { 00:22:44.311 "raid": { 00:22:44.311 "uuid": "361223d1-344f-4d99-93f5-1599cbd69314", 00:22:44.311 "strip_size_kb": 0, 00:22:44.311 "state": "online", 00:22:44.311 "raid_level": "raid1", 00:22:44.311 "superblock": true, 00:22:44.311 "num_base_bdevs": 2, 00:22:44.311 "num_base_bdevs_discovered": 2, 00:22:44.311 "num_base_bdevs_operational": 2, 00:22:44.311 "base_bdevs_list": [ 00:22:44.311 { 00:22:44.311 "name": "BaseBdev1", 00:22:44.311 "uuid": "a7786c9e-d6ab-47ab-b7e5-ee3bc998e297", 00:22:44.311 "is_configured": true, 00:22:44.311 "data_offset": 256, 00:22:44.311 "data_size": 7936 00:22:44.311 }, 00:22:44.311 { 00:22:44.311 "name": "BaseBdev2", 00:22:44.311 "uuid": "ab4d6872-3dda-4c08-87b3-c5caaf313317", 00:22:44.311 "is_configured": true, 00:22:44.311 "data_offset": 256, 00:22:44.311 "data_size": 7936 00:22:44.311 } 00:22:44.311 ] 00:22:44.311 } 00:22:44.311 } 00:22:44.311 }' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:44.311 BaseBdev2' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:44.311 
19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.311 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.570 [2024-12-05 19:42:37.752078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.570 19:42:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.570 "name": "Existed_Raid", 00:22:44.570 "uuid": "361223d1-344f-4d99-93f5-1599cbd69314", 00:22:44.570 "strip_size_kb": 0, 00:22:44.570 "state": "online", 00:22:44.570 "raid_level": "raid1", 00:22:44.570 "superblock": true, 00:22:44.570 "num_base_bdevs": 2, 00:22:44.570 "num_base_bdevs_discovered": 1, 00:22:44.570 "num_base_bdevs_operational": 1, 00:22:44.570 "base_bdevs_list": [ 00:22:44.570 { 00:22:44.570 "name": null, 00:22:44.570 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:44.570 "is_configured": false, 00:22:44.570 "data_offset": 0, 00:22:44.570 "data_size": 7936 00:22:44.570 }, 00:22:44.570 { 00:22:44.570 "name": "BaseBdev2", 00:22:44.570 "uuid": "ab4d6872-3dda-4c08-87b3-c5caaf313317", 00:22:44.570 "is_configured": true, 00:22:44.570 "data_offset": 256, 00:22:44.570 "data_size": 7936 00:22:44.570 } 00:22:44.570 ] 00:22:44.570 }' 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.570 19:42:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:45.140 19:42:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.140 [2024-12-05 19:42:38.410648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:45.140 [2024-12-05 19:42:38.410844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.140 [2024-12-05 19:42:38.494303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.140 [2024-12-05 19:42:38.494366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.140 [2024-12-05 19:42:38.494386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88939 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88939 ']' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88939 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.140 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88939 00:22:45.400 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.400 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.400 killing process with pid 88939 00:22:45.400 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88939' 00:22:45.400 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88939 00:22:45.400 [2024-12-05 19:42:38.582941] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.400 19:42:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88939 00:22:45.400 [2024-12-05 19:42:38.598787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.363 
19:42:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:46.363 00:22:46.363 real 0m5.540s 00:22:46.363 user 0m8.358s 00:22:46.363 sys 0m0.815s 00:22:46.363 19:42:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.363 19:42:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.363 ************************************ 00:22:46.363 END TEST raid_state_function_test_sb_md_interleaved 00:22:46.363 ************************************ 00:22:46.363 19:42:39 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:46.363 19:42:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:46.363 19:42:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.363 19:42:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.363 ************************************ 00:22:46.363 START TEST raid_superblock_test_md_interleaved 00:22:46.363 ************************************ 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:46.363 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89192 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89192 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89192 ']' 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.364 19:42:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.623 [2024-12-05 19:42:39.820048] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:22:46.623 [2024-12-05 19:42:39.820197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89192 ] 00:22:46.623 [2024-12-05 19:42:39.995168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.882 [2024-12-05 19:42:40.130301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.141 [2024-12-05 19:42:40.336492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.141 [2024-12-05 19:42:40.336610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.709 malloc1 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.709 [2024-12-05 19:42:40.893365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:47.709 [2024-12-05 19:42:40.893428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.709 [2024-12-05 19:42:40.893459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:47.709 [2024-12-05 19:42:40.893474] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.709 
[2024-12-05 19:42:40.896005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.709 [2024-12-05 19:42:40.896046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:47.709 pt1 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:47.709 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.710 malloc2 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.710 [2024-12-05 19:42:40.943996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:47.710 [2024-12-05 19:42:40.944062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.710 [2024-12-05 19:42:40.944092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:47.710 [2024-12-05 19:42:40.944107] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.710 [2024-12-05 19:42:40.946682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.710 [2024-12-05 19:42:40.946726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:47.710 pt2 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.710 [2024-12-05 19:42:40.952039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:47.710 [2024-12-05 19:42:40.954724] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:47.710 [2024-12-05 19:42:40.955037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:47.710 [2024-12-05 19:42:40.955067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:47.710 [2024-12-05 19:42:40.955193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:47.710 [2024-12-05 19:42:40.955330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:47.710 [2024-12-05 19:42:40.955350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:47.710 [2024-12-05 19:42:40.955442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.710 
19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.710 19:42:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.710 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.710 "name": "raid_bdev1", 00:22:47.710 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:47.710 "strip_size_kb": 0, 00:22:47.710 "state": "online", 00:22:47.710 "raid_level": "raid1", 00:22:47.710 "superblock": true, 00:22:47.710 "num_base_bdevs": 2, 00:22:47.710 "num_base_bdevs_discovered": 2, 00:22:47.710 "num_base_bdevs_operational": 2, 00:22:47.710 "base_bdevs_list": [ 00:22:47.710 { 00:22:47.710 "name": "pt1", 00:22:47.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:47.710 "is_configured": true, 00:22:47.710 "data_offset": 256, 00:22:47.710 "data_size": 7936 00:22:47.710 }, 00:22:47.710 { 00:22:47.710 "name": "pt2", 00:22:47.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:47.710 "is_configured": true, 00:22:47.710 "data_offset": 256, 00:22:47.710 "data_size": 7936 00:22:47.710 } 00:22:47.710 ] 00:22:47.710 }' 00:22:47.710 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.710 19:42:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.278 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:48.278 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:48.278 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:48.278 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:48.278 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.279 [2024-12-05 19:42:41.508535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.279 "name": "raid_bdev1", 00:22:48.279 "aliases": [ 00:22:48.279 "c563c851-17ee-4f47-8f08-e4f439d3b3da" 00:22:48.279 ], 00:22:48.279 "product_name": "Raid Volume", 00:22:48.279 "block_size": 4128, 00:22:48.279 "num_blocks": 7936, 00:22:48.279 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:48.279 "md_size": 32, 
00:22:48.279 "md_interleave": true, 00:22:48.279 "dif_type": 0, 00:22:48.279 "assigned_rate_limits": { 00:22:48.279 "rw_ios_per_sec": 0, 00:22:48.279 "rw_mbytes_per_sec": 0, 00:22:48.279 "r_mbytes_per_sec": 0, 00:22:48.279 "w_mbytes_per_sec": 0 00:22:48.279 }, 00:22:48.279 "claimed": false, 00:22:48.279 "zoned": false, 00:22:48.279 "supported_io_types": { 00:22:48.279 "read": true, 00:22:48.279 "write": true, 00:22:48.279 "unmap": false, 00:22:48.279 "flush": false, 00:22:48.279 "reset": true, 00:22:48.279 "nvme_admin": false, 00:22:48.279 "nvme_io": false, 00:22:48.279 "nvme_io_md": false, 00:22:48.279 "write_zeroes": true, 00:22:48.279 "zcopy": false, 00:22:48.279 "get_zone_info": false, 00:22:48.279 "zone_management": false, 00:22:48.279 "zone_append": false, 00:22:48.279 "compare": false, 00:22:48.279 "compare_and_write": false, 00:22:48.279 "abort": false, 00:22:48.279 "seek_hole": false, 00:22:48.279 "seek_data": false, 00:22:48.279 "copy": false, 00:22:48.279 "nvme_iov_md": false 00:22:48.279 }, 00:22:48.279 "memory_domains": [ 00:22:48.279 { 00:22:48.279 "dma_device_id": "system", 00:22:48.279 "dma_device_type": 1 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.279 "dma_device_type": 2 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "dma_device_id": "system", 00:22:48.279 "dma_device_type": 1 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.279 "dma_device_type": 2 00:22:48.279 } 00:22:48.279 ], 00:22:48.279 "driver_specific": { 00:22:48.279 "raid": { 00:22:48.279 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:48.279 "strip_size_kb": 0, 00:22:48.279 "state": "online", 00:22:48.279 "raid_level": "raid1", 00:22:48.279 "superblock": true, 00:22:48.279 "num_base_bdevs": 2, 00:22:48.279 "num_base_bdevs_discovered": 2, 00:22:48.279 "num_base_bdevs_operational": 2, 00:22:48.279 "base_bdevs_list": [ 00:22:48.279 { 00:22:48.279 "name": "pt1", 00:22:48.279 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:48.279 "is_configured": true, 00:22:48.279 "data_offset": 256, 00:22:48.279 "data_size": 7936 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "name": "pt2", 00:22:48.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.279 "is_configured": true, 00:22:48.279 "data_offset": 256, 00:22:48.279 "data_size": 7936 00:22:48.279 } 00:22:48.279 ] 00:22:48.279 } 00:22:48.279 } 00:22:48.279 }' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:48.279 pt2' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.279 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:48.539 19:42:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.539 [2024-12-05 19:42:41.792537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c563c851-17ee-4f47-8f08-e4f439d3b3da 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z c563c851-17ee-4f47-8f08-e4f439d3b3da ']' 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.539 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.539 [2024-12-05 19:42:41.844179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.539 [2024-12-05 19:42:41.844210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.539 [2024-12-05 19:42:41.844318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.539 [2024-12-05 19:42:41.844393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.540 [2024-12-05 19:42:41.844412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.540 19:42:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.540 19:42:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.540 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:48.799 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.799 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:48.799 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.799 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.799 [2024-12-05 19:42:41.984238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:48.799 [2024-12-05 19:42:41.986868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:48.799 [2024-12-05 19:42:41.986970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:22:48.799 [2024-12-05 19:42:41.987065] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:48.799 [2024-12-05 19:42:41.987109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.799 [2024-12-05 19:42:41.987154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:48.799 request: 00:22:48.799 { 00:22:48.799 "name": "raid_bdev1", 00:22:48.799 "raid_level": "raid1", 00:22:48.799 "base_bdevs": [ 00:22:48.799 "malloc1", 00:22:48.799 "malloc2" 00:22:48.799 ], 00:22:48.799 "superblock": false, 00:22:48.799 "method": "bdev_raid_create", 00:22:48.799 "req_id": 1 00:22:48.799 } 00:22:48.799 Got JSON-RPC error response 00:22:48.799 response: 00:22:48.799 { 00:22:48.799 "code": -17, 00:22:48.799 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:48.799 } 00:22:48.799 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:48.800 19:42:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.800 19:42:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.800 [2024-12-05 19:42:42.052243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:48.800 [2024-12-05 19:42:42.052300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.800 [2024-12-05 19:42:42.052324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:48.800 [2024-12-05 19:42:42.052341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.800 [2024-12-05 19:42:42.054915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.800 [2024-12-05 19:42:42.054957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:48.800 [2024-12-05 19:42:42.055021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:48.800 [2024-12-05 19:42:42.055092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:48.800 pt1 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.800 19:42:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.800 
"name": "raid_bdev1", 00:22:48.800 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:48.800 "strip_size_kb": 0, 00:22:48.800 "state": "configuring", 00:22:48.800 "raid_level": "raid1", 00:22:48.800 "superblock": true, 00:22:48.800 "num_base_bdevs": 2, 00:22:48.800 "num_base_bdevs_discovered": 1, 00:22:48.800 "num_base_bdevs_operational": 2, 00:22:48.800 "base_bdevs_list": [ 00:22:48.800 { 00:22:48.800 "name": "pt1", 00:22:48.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:48.800 "is_configured": true, 00:22:48.800 "data_offset": 256, 00:22:48.800 "data_size": 7936 00:22:48.800 }, 00:22:48.800 { 00:22:48.800 "name": null, 00:22:48.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.800 "is_configured": false, 00:22:48.800 "data_offset": 256, 00:22:48.800 "data_size": 7936 00:22:48.800 } 00:22:48.800 ] 00:22:48.800 }' 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.800 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.367 [2024-12-05 19:42:42.576400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:49.367 [2024-12-05 19:42:42.576483] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.367 [2024-12-05 19:42:42.576515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:49.367 [2024-12-05 19:42:42.576532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.367 [2024-12-05 19:42:42.576759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.367 [2024-12-05 19:42:42.576789] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:49.367 [2024-12-05 19:42:42.576856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:49.367 [2024-12-05 19:42:42.576891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:49.367 [2024-12-05 19:42:42.577004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:49.367 [2024-12-05 19:42:42.577024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:49.367 [2024-12-05 19:42:42.577111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:49.367 [2024-12-05 19:42:42.577217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:49.367 [2024-12-05 19:42:42.577232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:49.367 [2024-12-05 19:42:42.577316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.367 pt2 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:49.367 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:49.368 19:42:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.368 "name": 
"raid_bdev1", 00:22:49.368 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:49.368 "strip_size_kb": 0, 00:22:49.368 "state": "online", 00:22:49.368 "raid_level": "raid1", 00:22:49.368 "superblock": true, 00:22:49.368 "num_base_bdevs": 2, 00:22:49.368 "num_base_bdevs_discovered": 2, 00:22:49.368 "num_base_bdevs_operational": 2, 00:22:49.368 "base_bdevs_list": [ 00:22:49.368 { 00:22:49.368 "name": "pt1", 00:22:49.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:49.368 "is_configured": true, 00:22:49.368 "data_offset": 256, 00:22:49.368 "data_size": 7936 00:22:49.368 }, 00:22:49.368 { 00:22:49.368 "name": "pt2", 00:22:49.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:49.368 "is_configured": true, 00:22:49.368 "data_offset": 256, 00:22:49.368 "data_size": 7936 00:22:49.368 } 00:22:49.368 ] 00:22:49.368 }' 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.368 19:42:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:49.936 19:42:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.936 [2024-12-05 19:42:43.128962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.936 "name": "raid_bdev1", 00:22:49.936 "aliases": [ 00:22:49.936 "c563c851-17ee-4f47-8f08-e4f439d3b3da" 00:22:49.936 ], 00:22:49.936 "product_name": "Raid Volume", 00:22:49.936 "block_size": 4128, 00:22:49.936 "num_blocks": 7936, 00:22:49.936 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:49.936 "md_size": 32, 00:22:49.936 "md_interleave": true, 00:22:49.936 "dif_type": 0, 00:22:49.936 "assigned_rate_limits": { 00:22:49.936 "rw_ios_per_sec": 0, 00:22:49.936 "rw_mbytes_per_sec": 0, 00:22:49.936 "r_mbytes_per_sec": 0, 00:22:49.936 "w_mbytes_per_sec": 0 00:22:49.936 }, 00:22:49.936 "claimed": false, 00:22:49.936 "zoned": false, 00:22:49.936 "supported_io_types": { 00:22:49.936 "read": true, 00:22:49.936 "write": true, 00:22:49.936 "unmap": false, 00:22:49.936 "flush": false, 00:22:49.936 "reset": true, 00:22:49.936 "nvme_admin": false, 00:22:49.936 "nvme_io": false, 00:22:49.936 "nvme_io_md": false, 00:22:49.936 "write_zeroes": true, 00:22:49.936 "zcopy": false, 00:22:49.936 "get_zone_info": false, 00:22:49.936 "zone_management": false, 00:22:49.936 "zone_append": false, 00:22:49.936 "compare": false, 00:22:49.936 "compare_and_write": false, 00:22:49.936 "abort": false, 00:22:49.936 "seek_hole": false, 00:22:49.936 "seek_data": false, 00:22:49.936 "copy": false, 00:22:49.936 "nvme_iov_md": 
false 00:22:49.936 }, 00:22:49.936 "memory_domains": [ 00:22:49.936 { 00:22:49.936 "dma_device_id": "system", 00:22:49.936 "dma_device_type": 1 00:22:49.936 }, 00:22:49.936 { 00:22:49.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.936 "dma_device_type": 2 00:22:49.936 }, 00:22:49.936 { 00:22:49.936 "dma_device_id": "system", 00:22:49.936 "dma_device_type": 1 00:22:49.936 }, 00:22:49.936 { 00:22:49.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.936 "dma_device_type": 2 00:22:49.936 } 00:22:49.936 ], 00:22:49.936 "driver_specific": { 00:22:49.936 "raid": { 00:22:49.936 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:49.936 "strip_size_kb": 0, 00:22:49.936 "state": "online", 00:22:49.936 "raid_level": "raid1", 00:22:49.936 "superblock": true, 00:22:49.936 "num_base_bdevs": 2, 00:22:49.936 "num_base_bdevs_discovered": 2, 00:22:49.936 "num_base_bdevs_operational": 2, 00:22:49.936 "base_bdevs_list": [ 00:22:49.936 { 00:22:49.936 "name": "pt1", 00:22:49.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:49.936 "is_configured": true, 00:22:49.936 "data_offset": 256, 00:22:49.936 "data_size": 7936 00:22:49.936 }, 00:22:49.936 { 00:22:49.936 "name": "pt2", 00:22:49.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:49.936 "is_configured": true, 00:22:49.936 "data_offset": 256, 00:22:49.936 "data_size": 7936 00:22:49.936 } 00:22:49.936 ] 00:22:49.936 } 00:22:49.936 } 00:22:49.936 }' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:49.936 pt2' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:49.936 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.196 [2024-12-05 19:42:43.400984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' c563c851-17ee-4f47-8f08-e4f439d3b3da '!=' c563c851-17ee-4f47-8f08-e4f439d3b3da ']' 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.196 [2024-12-05 19:42:43.448704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.196 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:50.196 "name": "raid_bdev1", 00:22:50.196 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:50.196 "strip_size_kb": 0, 00:22:50.196 "state": "online", 00:22:50.196 "raid_level": "raid1", 00:22:50.196 "superblock": true, 00:22:50.196 "num_base_bdevs": 2, 00:22:50.196 "num_base_bdevs_discovered": 1, 00:22:50.196 "num_base_bdevs_operational": 1, 00:22:50.196 "base_bdevs_list": [ 00:22:50.196 { 00:22:50.196 "name": null, 00:22:50.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.197 "is_configured": false, 00:22:50.197 "data_offset": 0, 00:22:50.197 "data_size": 7936 00:22:50.197 }, 00:22:50.197 { 00:22:50.197 "name": "pt2", 00:22:50.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.197 "is_configured": true, 00:22:50.197 "data_offset": 256, 00:22:50.197 "data_size": 7936 00:22:50.197 } 00:22:50.197 ] 00:22:50.197 }' 00:22:50.197 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.197 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 [2024-12-05 19:42:43.984870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:50.765 [2024-12-05 19:42:43.985034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.765 [2024-12-05 19:42:43.985259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.765 [2024-12-05 19:42:43.985476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:22:50.765 [2024-12-05 19:42:43.985637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 19:42:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 [2024-12-05 19:42:44.064886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:50.765 [2024-12-05 19:42:44.064945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.765 [2024-12-05 19:42:44.064969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:50.765 [2024-12-05 19:42:44.064984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.765 [2024-12-05 19:42:44.067754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.765 [2024-12-05 19:42:44.067853] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:50.765 [2024-12-05 19:42:44.067914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:50.765 [2024-12-05 19:42:44.067981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:50.765 [2024-12-05 19:42:44.068085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:50.765 [2024-12-05 19:42:44.068106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:22:50.765 [2024-12-05 19:42:44.068213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:50.765 [2024-12-05 19:42:44.068301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:50.765 [2024-12-05 19:42:44.068322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:50.765 [2024-12-05 19:42:44.068411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.765 pt2 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.765 19:42:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.765 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.765 "name": "raid_bdev1", 00:22:50.765 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:50.765 "strip_size_kb": 0, 00:22:50.765 "state": "online", 00:22:50.765 "raid_level": "raid1", 00:22:50.765 "superblock": true, 00:22:50.765 "num_base_bdevs": 2, 00:22:50.765 "num_base_bdevs_discovered": 1, 00:22:50.765 "num_base_bdevs_operational": 1, 00:22:50.765 "base_bdevs_list": [ 00:22:50.765 { 00:22:50.765 "name": null, 00:22:50.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.765 "is_configured": false, 00:22:50.765 "data_offset": 256, 00:22:50.766 "data_size": 7936 00:22:50.766 }, 00:22:50.766 { 00:22:50.766 "name": "pt2", 00:22:50.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.766 "is_configured": true, 00:22:50.766 "data_offset": 256, 00:22:50.766 "data_size": 7936 00:22:50.766 } 00:22:50.766 ] 00:22:50.766 }' 00:22:50.766 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.766 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:51.334 19:42:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.334 [2024-12-05 19:42:44.601040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.334 [2024-12-05 19:42:44.601303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.334 [2024-12-05 19:42:44.601504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.334 [2024-12-05 19:42:44.601676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.334 [2024-12-05 19:42:44.601872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.334 [2024-12-05 19:42:44.661064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:51.334 [2024-12-05 19:42:44.661186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.334 [2024-12-05 19:42:44.661212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:51.334 [2024-12-05 19:42:44.661225] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.334 [2024-12-05 19:42:44.663871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.334 [2024-12-05 19:42:44.664035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:51.334 [2024-12-05 19:42:44.664119] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:51.334 [2024-12-05 19:42:44.664178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:51.334 [2024-12-05 19:42:44.664321] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:51.334 [2024-12-05 19:42:44.664338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.334 [2024-12-05 19:42:44.664359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:51.334 [2024-12-05 19:42:44.664459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:51.334 [2024-12-05 19:42:44.664555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:22:51.334 [2024-12-05 19:42:44.664570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:51.334 [2024-12-05 19:42:44.664688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:51.334 [2024-12-05 19:42:44.664809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:51.334 [2024-12-05 19:42:44.664866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:51.334 [2024-12-05 19:42:44.664962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.334 pt1 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.334 19:42:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.334 "name": "raid_bdev1", 00:22:51.334 "uuid": "c563c851-17ee-4f47-8f08-e4f439d3b3da", 00:22:51.334 "strip_size_kb": 0, 00:22:51.334 "state": "online", 00:22:51.334 "raid_level": "raid1", 00:22:51.334 "superblock": true, 00:22:51.334 "num_base_bdevs": 2, 00:22:51.334 "num_base_bdevs_discovered": 1, 00:22:51.334 "num_base_bdevs_operational": 1, 00:22:51.334 "base_bdevs_list": [ 00:22:51.334 { 00:22:51.334 "name": null, 00:22:51.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.334 "is_configured": false, 00:22:51.334 "data_offset": 256, 00:22:51.334 "data_size": 7936 00:22:51.334 }, 00:22:51.334 { 00:22:51.334 "name": "pt2", 00:22:51.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:51.334 "is_configured": true, 00:22:51.334 "data_offset": 256, 00:22:51.334 "data_size": 7936 00:22:51.334 } 00:22:51.334 ] 00:22:51.334 }' 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.334 19:42:44 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:51.901 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.902 [2024-12-05 19:42:45.233487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' c563c851-17ee-4f47-8f08-e4f439d3b3da '!=' c563c851-17ee-4f47-8f08-e4f439d3b3da ']' 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89192 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89192 ']' 00:22:51.902 19:42:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89192 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89192 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.902 killing process with pid 89192 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89192' 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89192 00:22:51.902 [2024-12-05 19:42:45.313640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:51.902 19:42:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89192 00:22:51.902 [2024-12-05 19:42:45.313786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.902 [2024-12-05 19:42:45.313861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.902 [2024-12-05 19:42:45.313885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:52.160 [2024-12-05 19:42:45.489336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:53.098 19:42:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:53.098 00:22:53.098 real 0m6.794s 00:22:53.098 user 0m10.876s 00:22:53.098 sys 0m0.956s 
00:22:53.098 19:42:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.098 ************************************ 00:22:53.098 END TEST raid_superblock_test_md_interleaved 00:22:53.098 ************************************ 00:22:53.098 19:42:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.357 19:42:46 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:53.357 19:42:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:53.357 19:42:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.357 19:42:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:53.357 ************************************ 00:22:53.357 START TEST raid_rebuild_test_sb_md_interleaved 00:22:53.357 ************************************ 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:53.357 19:42:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:53.357 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:53.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89520 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89520 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89520 ']' 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.358 19:42:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.358 [2024-12-05 19:42:46.690358] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:22:53.358 [2024-12-05 19:42:46.690817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89520 ] 00:22:53.358 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:53.358 Zero copy mechanism will not be used. 00:22:53.616 [2024-12-05 19:42:46.876897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.616 [2024-12-05 19:42:47.012543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.874 [2024-12-05 19:42:47.224925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.874 [2024-12-05 19:42:47.225263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.441 BaseBdev1_malloc 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:54.441 19:42:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.441 [2024-12-05 19:42:47.757703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:54.441 [2024-12-05 19:42:47.757851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.441 [2024-12-05 19:42:47.757892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:54.441 [2024-12-05 19:42:47.757927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.441 [2024-12-05 19:42:47.760511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.441 [2024-12-05 19:42:47.760572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:54.441 BaseBdev1 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.441 BaseBdev2_malloc 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.441 [2024-12-05 19:42:47.807943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:54.441 [2024-12-05 19:42:47.808025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.441 [2024-12-05 19:42:47.808053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:54.441 [2024-12-05 19:42:47.808070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.441 [2024-12-05 19:42:47.810554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.441 [2024-12-05 19:42:47.810620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:54.441 BaseBdev2 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.441 spare_malloc 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:54.441 spare_delay 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.441 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.699 [2024-12-05 19:42:47.884935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:54.699 [2024-12-05 19:42:47.885010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.699 [2024-12-05 19:42:47.885041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:54.699 [2024-12-05 19:42:47.885059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.699 [2024-12-05 19:42:47.887622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.699 [2024-12-05 19:42:47.887689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:54.699 spare 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.700 [2024-12-05 19:42:47.897008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:54.700 [2024-12-05 19:42:47.899478] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.700 [2024-12-05 19:42:47.899739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:54.700 [2024-12-05 19:42:47.899761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:54.700 [2024-12-05 19:42:47.899849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:54.700 [2024-12-05 19:42:47.899945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:54.700 [2024-12-05 19:42:47.899958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:54.700 [2024-12-05 19:42:47.900079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.700 "name": "raid_bdev1", 00:22:54.700 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:54.700 "strip_size_kb": 0, 00:22:54.700 "state": "online", 00:22:54.700 "raid_level": "raid1", 00:22:54.700 "superblock": true, 00:22:54.700 "num_base_bdevs": 2, 00:22:54.700 "num_base_bdevs_discovered": 2, 00:22:54.700 "num_base_bdevs_operational": 2, 00:22:54.700 "base_bdevs_list": [ 00:22:54.700 { 00:22:54.700 "name": "BaseBdev1", 00:22:54.700 "uuid": "53bdd1eb-1cb8-5451-bf83-6ff6021d44d1", 00:22:54.700 "is_configured": true, 00:22:54.700 "data_offset": 256, 00:22:54.700 "data_size": 7936 00:22:54.700 }, 00:22:54.700 { 00:22:54.700 "name": "BaseBdev2", 00:22:54.700 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:54.700 "is_configured": true, 00:22:54.700 "data_offset": 256, 00:22:54.700 "data_size": 7936 00:22:54.700 } 00:22:54.700 ] 00:22:54.700 }' 00:22:54.700 19:42:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.700 19:42:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:55.273 [2024-12-05 19:42:48.405591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.273 [2024-12-05 19:42:48.501160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.273 "name": "raid_bdev1", 00:22:55.273 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:55.273 "strip_size_kb": 0, 00:22:55.273 "state": "online", 00:22:55.273 "raid_level": "raid1", 00:22:55.273 "superblock": true, 00:22:55.273 "num_base_bdevs": 2, 00:22:55.273 "num_base_bdevs_discovered": 1, 00:22:55.273 "num_base_bdevs_operational": 1, 00:22:55.273 "base_bdevs_list": [ 00:22:55.273 { 00:22:55.273 "name": null, 00:22:55.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.273 "is_configured": false, 00:22:55.273 "data_offset": 0, 00:22:55.273 "data_size": 7936 00:22:55.273 }, 00:22:55.273 { 00:22:55.273 "name": "BaseBdev2", 00:22:55.273 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:55.273 "is_configured": true, 00:22:55.273 "data_offset": 256, 00:22:55.273 "data_size": 7936 00:22:55.273 } 00:22:55.273 ] 00:22:55.273 }' 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.273 19:42:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.849 19:42:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:55.849 19:42:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.849 19:42:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # 
set +x 00:22:55.849 [2024-12-05 19:42:49.021400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:55.849 [2024-12-05 19:42:49.039230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:55.849 19:42:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.849 19:42:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:55.849 [2024-12-05 19:42:49.041883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.782 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:22:56.783 "name": "raid_bdev1", 00:22:56.783 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:56.783 "strip_size_kb": 0, 00:22:56.783 "state": "online", 00:22:56.783 "raid_level": "raid1", 00:22:56.783 "superblock": true, 00:22:56.783 "num_base_bdevs": 2, 00:22:56.783 "num_base_bdevs_discovered": 2, 00:22:56.783 "num_base_bdevs_operational": 2, 00:22:56.783 "process": { 00:22:56.783 "type": "rebuild", 00:22:56.783 "target": "spare", 00:22:56.783 "progress": { 00:22:56.783 "blocks": 2560, 00:22:56.783 "percent": 32 00:22:56.783 } 00:22:56.783 }, 00:22:56.783 "base_bdevs_list": [ 00:22:56.783 { 00:22:56.783 "name": "spare", 00:22:56.783 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:22:56.783 "is_configured": true, 00:22:56.783 "data_offset": 256, 00:22:56.783 "data_size": 7936 00:22:56.783 }, 00:22:56.783 { 00:22:56.783 "name": "BaseBdev2", 00:22:56.783 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:56.783 "is_configured": true, 00:22:56.783 "data_offset": 256, 00:22:56.783 "data_size": 7936 00:22:56.783 } 00:22:56.783 ] 00:22:56.783 }' 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.783 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:56.783 [2024-12-05 
19:42:50.215156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.040 [2024-12-05 19:42:50.251078] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:57.040 [2024-12-05 19:42:50.251230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.040 [2024-12-05 19:42:50.251253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.040 [2024-12-05 19:42:50.251287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.040 19:42:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.040 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.040 "name": "raid_bdev1", 00:22:57.040 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:57.040 "strip_size_kb": 0, 00:22:57.040 "state": "online", 00:22:57.040 "raid_level": "raid1", 00:22:57.040 "superblock": true, 00:22:57.041 "num_base_bdevs": 2, 00:22:57.041 "num_base_bdevs_discovered": 1, 00:22:57.041 "num_base_bdevs_operational": 1, 00:22:57.041 "base_bdevs_list": [ 00:22:57.041 { 00:22:57.041 "name": null, 00:22:57.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.041 "is_configured": false, 00:22:57.041 "data_offset": 0, 00:22:57.041 "data_size": 7936 00:22:57.041 }, 00:22:57.041 { 00:22:57.041 "name": "BaseBdev2", 00:22:57.041 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:57.041 "is_configured": true, 00:22:57.041 "data_offset": 256, 00:22:57.041 "data_size": 7936 00:22:57.041 } 00:22:57.041 ] 00:22:57.041 }' 00:22:57.041 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.041 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.608 19:42:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.608 "name": "raid_bdev1", 00:22:57.608 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:57.608 "strip_size_kb": 0, 00:22:57.608 "state": "online", 00:22:57.608 "raid_level": "raid1", 00:22:57.608 "superblock": true, 00:22:57.608 "num_base_bdevs": 2, 00:22:57.608 "num_base_bdevs_discovered": 1, 00:22:57.608 "num_base_bdevs_operational": 1, 00:22:57.608 "base_bdevs_list": [ 00:22:57.608 { 00:22:57.608 "name": null, 00:22:57.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.608 "is_configured": false, 00:22:57.608 "data_offset": 0, 00:22:57.608 "data_size": 7936 00:22:57.608 }, 00:22:57.608 { 00:22:57.608 "name": "BaseBdev2", 00:22:57.608 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:57.608 "is_configured": true, 00:22:57.608 "data_offset": 256, 
00:22:57.608 "data_size": 7936 00:22:57.608 } 00:22:57.608 ] 00:22:57.608 }' 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.608 19:42:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.608 [2024-12-05 19:42:50.986076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:57.608 [2024-12-05 19:42:51.002809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:57.608 19:42:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.608 19:42:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:57.608 [2024-12-05 19:42:51.005420] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.981 19:42:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.981 "name": "raid_bdev1", 00:22:58.981 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:58.981 "strip_size_kb": 0, 00:22:58.981 "state": "online", 00:22:58.981 "raid_level": "raid1", 00:22:58.981 "superblock": true, 00:22:58.981 "num_base_bdevs": 2, 00:22:58.981 "num_base_bdevs_discovered": 2, 00:22:58.981 "num_base_bdevs_operational": 2, 00:22:58.981 "process": { 00:22:58.981 "type": "rebuild", 00:22:58.981 "target": "spare", 00:22:58.981 "progress": { 00:22:58.981 "blocks": 2560, 00:22:58.981 "percent": 32 00:22:58.981 } 00:22:58.981 }, 00:22:58.981 "base_bdevs_list": [ 00:22:58.981 { 00:22:58.981 "name": "spare", 00:22:58.981 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:22:58.981 "is_configured": true, 00:22:58.981 "data_offset": 256, 00:22:58.981 "data_size": 7936 00:22:58.981 }, 00:22:58.981 { 00:22:58.981 "name": "BaseBdev2", 00:22:58.981 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:58.981 "is_configured": true, 00:22:58.981 "data_offset": 256, 00:22:58.981 "data_size": 7936 00:22:58.981 } 
00:22:58.981 ] 00:22:58.981 }' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:58.981 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=806 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:58.981 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.982 "name": "raid_bdev1", 00:22:58.982 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:22:58.982 "strip_size_kb": 0, 00:22:58.982 "state": "online", 00:22:58.982 "raid_level": "raid1", 00:22:58.982 "superblock": true, 00:22:58.982 "num_base_bdevs": 2, 00:22:58.982 "num_base_bdevs_discovered": 2, 00:22:58.982 "num_base_bdevs_operational": 2, 00:22:58.982 "process": { 00:22:58.982 "type": "rebuild", 00:22:58.982 "target": "spare", 00:22:58.982 "progress": { 00:22:58.982 "blocks": 2816, 00:22:58.982 "percent": 35 00:22:58.982 } 00:22:58.982 }, 00:22:58.982 "base_bdevs_list": [ 00:22:58.982 { 00:22:58.982 "name": "spare", 00:22:58.982 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:22:58.982 "is_configured": true, 00:22:58.982 "data_offset": 256, 00:22:58.982 "data_size": 7936 00:22:58.982 }, 00:22:58.982 { 00:22:58.982 "name": "BaseBdev2", 00:22:58.982 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:22:58.982 "is_configured": true, 00:22:58.982 "data_offset": 256, 00:22:58.982 "data_size": 7936 00:22:58.982 } 00:22:58.982 ] 00:22:58.982 }' 00:22:58.982 19:42:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.982 19:42:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.919 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.177 
19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:00.177 "name": "raid_bdev1", 00:23:00.177 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:00.177 "strip_size_kb": 0, 00:23:00.177 "state": "online", 00:23:00.177 "raid_level": "raid1", 00:23:00.177 "superblock": true, 00:23:00.177 "num_base_bdevs": 2, 00:23:00.177 "num_base_bdevs_discovered": 2, 00:23:00.177 "num_base_bdevs_operational": 2, 00:23:00.177 "process": { 00:23:00.177 "type": "rebuild", 00:23:00.177 "target": "spare", 00:23:00.177 "progress": { 00:23:00.177 "blocks": 5888, 00:23:00.177 "percent": 74 00:23:00.177 } 00:23:00.177 }, 00:23:00.177 "base_bdevs_list": [ 00:23:00.177 { 00:23:00.177 "name": "spare", 00:23:00.177 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:23:00.177 "is_configured": true, 00:23:00.177 "data_offset": 256, 00:23:00.177 "data_size": 7936 00:23:00.177 }, 00:23:00.177 { 00:23:00.177 "name": "BaseBdev2", 00:23:00.178 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:00.178 "is_configured": true, 00:23:00.178 "data_offset": 256, 00:23:00.178 "data_size": 7936 00:23:00.178 } 00:23:00.178 ] 00:23:00.178 }' 00:23:00.178 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:00.178 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:00.178 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:00.178 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.178 19:42:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:00.746 [2024-12-05 19:42:54.129610] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:00.746 [2024-12-05 19:42:54.129728] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:00.746 [2024-12-05 19:42:54.129969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.312 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:01.312 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.312 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.312 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.312 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.312 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.313 "name": "raid_bdev1", 00:23:01.313 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:01.313 "strip_size_kb": 0, 00:23:01.313 "state": "online", 00:23:01.313 "raid_level": "raid1", 00:23:01.313 "superblock": true, 00:23:01.313 "num_base_bdevs": 2, 00:23:01.313 
"num_base_bdevs_discovered": 2, 00:23:01.313 "num_base_bdevs_operational": 2, 00:23:01.313 "base_bdevs_list": [ 00:23:01.313 { 00:23:01.313 "name": "spare", 00:23:01.313 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:23:01.313 "is_configured": true, 00:23:01.313 "data_offset": 256, 00:23:01.313 "data_size": 7936 00:23:01.313 }, 00:23:01.313 { 00:23:01.313 "name": "BaseBdev2", 00:23:01.313 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:01.313 "is_configured": true, 00:23:01.313 "data_offset": 256, 00:23:01.313 "data_size": 7936 00:23:01.313 } 00:23:01.313 ] 00:23:01.313 }' 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.313 
19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.313 "name": "raid_bdev1", 00:23:01.313 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:01.313 "strip_size_kb": 0, 00:23:01.313 "state": "online", 00:23:01.313 "raid_level": "raid1", 00:23:01.313 "superblock": true, 00:23:01.313 "num_base_bdevs": 2, 00:23:01.313 "num_base_bdevs_discovered": 2, 00:23:01.313 "num_base_bdevs_operational": 2, 00:23:01.313 "base_bdevs_list": [ 00:23:01.313 { 00:23:01.313 "name": "spare", 00:23:01.313 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:23:01.313 "is_configured": true, 00:23:01.313 "data_offset": 256, 00:23:01.313 "data_size": 7936 00:23:01.313 }, 00:23:01.313 { 00:23:01.313 "name": "BaseBdev2", 00:23:01.313 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:01.313 "is_configured": true, 00:23:01.313 "data_offset": 256, 00:23:01.313 "data_size": 7936 00:23:01.313 } 00:23:01.313 ] 00:23:01.313 }' 00:23:01.313 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:01.571 19:42:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.571 "name": 
"raid_bdev1", 00:23:01.571 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:01.571 "strip_size_kb": 0, 00:23:01.571 "state": "online", 00:23:01.571 "raid_level": "raid1", 00:23:01.571 "superblock": true, 00:23:01.571 "num_base_bdevs": 2, 00:23:01.571 "num_base_bdevs_discovered": 2, 00:23:01.571 "num_base_bdevs_operational": 2, 00:23:01.571 "base_bdevs_list": [ 00:23:01.571 { 00:23:01.571 "name": "spare", 00:23:01.571 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:23:01.571 "is_configured": true, 00:23:01.571 "data_offset": 256, 00:23:01.571 "data_size": 7936 00:23:01.571 }, 00:23:01.571 { 00:23:01.571 "name": "BaseBdev2", 00:23:01.571 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:01.571 "is_configured": true, 00:23:01.571 "data_offset": 256, 00:23:01.571 "data_size": 7936 00:23:01.571 } 00:23:01.571 ] 00:23:01.571 }' 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.571 19:42:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 [2024-12-05 19:42:55.367182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:02.151 [2024-12-05 19:42:55.367226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.151 [2024-12-05 19:42:55.367405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.151 [2024-12-05 19:42:55.367500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.151 [2024-12-05 
19:42:55.367516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.151 19:42:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 [2024-12-05 19:42:55.447166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:02.151 [2024-12-05 19:42:55.447228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.151 [2024-12-05 19:42:55.447260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:02.151 [2024-12-05 19:42:55.447274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.151 [2024-12-05 19:42:55.450008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.151 [2024-12-05 19:42:55.450053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:02.151 [2024-12-05 19:42:55.450127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:02.151 [2024-12-05 19:42:55.450188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:02.151 [2024-12-05 19:42:55.450362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.151 spare 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 [2024-12-05 19:42:55.550469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:02.151 [2024-12-05 19:42:55.550518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:02.151 [2024-12-05 19:42:55.550636] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:02.151 [2024-12-05 19:42:55.550777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:02.151 [2024-12-05 19:42:55.550809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:02.151 [2024-12-05 19:42:55.550923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.151 19:42:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.408 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.408 "name": "raid_bdev1", 00:23:02.408 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:02.408 "strip_size_kb": 0, 00:23:02.408 "state": "online", 00:23:02.408 "raid_level": "raid1", 00:23:02.408 "superblock": true, 00:23:02.408 "num_base_bdevs": 2, 00:23:02.408 "num_base_bdevs_discovered": 2, 00:23:02.408 "num_base_bdevs_operational": 2, 00:23:02.408 "base_bdevs_list": [ 00:23:02.408 { 00:23:02.408 "name": "spare", 00:23:02.408 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:23:02.408 "is_configured": true, 00:23:02.408 "data_offset": 256, 00:23:02.408 "data_size": 7936 00:23:02.408 }, 00:23:02.408 { 00:23:02.408 "name": "BaseBdev2", 00:23:02.408 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:02.408 "is_configured": true, 00:23:02.408 "data_offset": 256, 00:23:02.408 "data_size": 7936 00:23:02.408 } 00:23:02.408 ] 00:23:02.408 }' 00:23:02.408 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.408 19:42:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:02.664 19:42:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.664 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:02.921 "name": "raid_bdev1", 00:23:02.921 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:02.921 "strip_size_kb": 0, 00:23:02.921 "state": "online", 00:23:02.921 "raid_level": "raid1", 00:23:02.921 "superblock": true, 00:23:02.921 "num_base_bdevs": 2, 00:23:02.921 "num_base_bdevs_discovered": 2, 00:23:02.921 "num_base_bdevs_operational": 2, 00:23:02.921 "base_bdevs_list": [ 00:23:02.921 { 00:23:02.921 "name": "spare", 00:23:02.921 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26", 00:23:02.921 "is_configured": true, 00:23:02.921 "data_offset": 256, 00:23:02.921 "data_size": 7936 00:23:02.921 }, 00:23:02.921 { 00:23:02.921 "name": "BaseBdev2", 00:23:02.921 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:02.921 "is_configured": true, 00:23:02.921 "data_offset": 256, 00:23:02.921 "data_size": 7936 00:23:02.921 } 00:23:02.921 ] 00:23:02.921 }' 00:23:02.921 19:42:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.921 [2024-12-05 19:42:56.311634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:02.921 19:42:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.921 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.179 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.179 "name": "raid_bdev1", 00:23:03.179 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4", 00:23:03.179 "strip_size_kb": 0, 00:23:03.179 "state": "online", 00:23:03.179 
"raid_level": "raid1", 00:23:03.179 "superblock": true, 00:23:03.179 "num_base_bdevs": 2, 00:23:03.179 "num_base_bdevs_discovered": 1, 00:23:03.179 "num_base_bdevs_operational": 1, 00:23:03.179 "base_bdevs_list": [ 00:23:03.179 { 00:23:03.179 "name": null, 00:23:03.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.179 "is_configured": false, 00:23:03.179 "data_offset": 0, 00:23:03.179 "data_size": 7936 00:23:03.179 }, 00:23:03.179 { 00:23:03.179 "name": "BaseBdev2", 00:23:03.179 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075", 00:23:03.179 "is_configured": true, 00:23:03.179 "data_offset": 256, 00:23:03.179 "data_size": 7936 00:23:03.179 } 00:23:03.179 ] 00:23:03.179 }' 00:23:03.179 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.179 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:03.436 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.436 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.436 [2024-12-05 19:42:56.871838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:03.436 [2024-12-05 19:42:56.872093] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.436 [2024-12-05 19:42:56.872120] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:03.436 [2024-12-05 19:42:56.872172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:03.694 [2024-12-05 19:42:56.888785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:23:03.694 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.694 19:42:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1
00:23:03.694 [2024-12-05 19:42:56.891290] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:23:04.643 "name": "raid_bdev1",
00:23:04.643 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:04.643 "strip_size_kb": 0,
00:23:04.643 "state": "online",
00:23:04.643 "raid_level": "raid1",
00:23:04.643 "superblock": true,
00:23:04.643 "num_base_bdevs": 2,
00:23:04.643 "num_base_bdevs_discovered": 2,
00:23:04.643 "num_base_bdevs_operational": 2,
00:23:04.643 "process": {
00:23:04.643 "type": "rebuild",
00:23:04.643 "target": "spare",
00:23:04.643 "progress": {
00:23:04.643 "blocks": 2560,
00:23:04.643 "percent": 32
00:23:04.643 }
00:23:04.643 },
00:23:04.643 "base_bdevs_list": [
00:23:04.643 {
00:23:04.643 "name": "spare",
00:23:04.643 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26",
00:23:04.643 "is_configured": true,
00:23:04.643 "data_offset": 256,
00:23:04.643 "data_size": 7936
00:23:04.643 },
00:23:04.643 {
00:23:04.643 "name": "BaseBdev2",
00:23:04.643 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:04.643 "is_configured": true,
00:23:04.643 "data_offset": 256,
00:23:04.643 "data_size": 7936
00:23:04.643 }
00:23:04.643 ]
00:23:04.643 }'
00:23:04.643 19:42:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:23:04.643 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:04.643 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:23:04.643 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:23:04.643 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:23:04.643 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:04.643 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:04.643 [2024-12-05 19:42:58.064737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:04.913 [2024-12-05 19:42:58.100619] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:23:04.913 [2024-12-05 19:42:58.100915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:04.913 [2024-12-05 19:42:58.101083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:04.913 [2024-12-05 19:42:58.101140] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:04.913 "name": "raid_bdev1",
00:23:04.913 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:04.913 "strip_size_kb": 0,
00:23:04.913 "state": "online",
00:23:04.913 "raid_level": "raid1",
00:23:04.913 "superblock": true,
00:23:04.913 "num_base_bdevs": 2,
00:23:04.913 "num_base_bdevs_discovered": 1,
00:23:04.913 "num_base_bdevs_operational": 1,
00:23:04.913 "base_bdevs_list": [
00:23:04.913 {
00:23:04.913 "name": null,
00:23:04.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:04.913 "is_configured": false,
00:23:04.913 "data_offset": 0,
00:23:04.913 "data_size": 7936
00:23:04.913 },
00:23:04.913 {
00:23:04.913 "name": "BaseBdev2",
00:23:04.913 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:04.913 "is_configured": true,
00:23:04.913 "data_offset": 256,
00:23:04.913 "data_size": 7936
00:23:04.913 }
00:23:04.913 ]
00:23:04.913 }'
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:04.913 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:05.480 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:23:05.480 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:05.480 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:05.480 [2024-12-05 19:42:58.655051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:23:05.480 [2024-12-05 19:42:58.655276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:05.480 [2024-12-05 19:42:58.655324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:23:05.480 [2024-12-05 19:42:58.655345] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:05.480 [2024-12-05 19:42:58.655599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:05.480 [2024-12-05 19:42:58.655627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:23:05.480 [2024-12-05 19:42:58.655722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:23:05.480 [2024-12-05 19:42:58.655747] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:23:05.480 [2024-12-05 19:42:58.655761] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:23:05.480 [2024-12-05 19:42:58.655793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:05.480 [2024-12-05 19:42:58.671179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:23:05.480 spare
00:23:05.480 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:05.480 19:42:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1
00:23:05.480 [2024-12-05 19:42:58.673720] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:23:06.416 "name": "raid_bdev1",
00:23:06.416 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:06.416 "strip_size_kb": 0,
00:23:06.416 "state": "online",
00:23:06.416 "raid_level": "raid1",
00:23:06.416 "superblock": true,
00:23:06.416 "num_base_bdevs": 2,
00:23:06.416 "num_base_bdevs_discovered": 2,
00:23:06.416 "num_base_bdevs_operational": 2,
00:23:06.416 "process": {
00:23:06.416 "type": "rebuild",
00:23:06.416 "target": "spare",
00:23:06.416 "progress": {
00:23:06.416 "blocks": 2560,
00:23:06.416 "percent": 32
00:23:06.416 }
00:23:06.416 },
00:23:06.416 "base_bdevs_list": [
00:23:06.416 {
00:23:06.416 "name": "spare",
00:23:06.416 "uuid": "5e0a4cd3-c034-5c6f-bcd3-5f900f8b5b26",
00:23:06.416 "is_configured": true,
00:23:06.416 "data_offset": 256,
00:23:06.416 "data_size": 7936
00:23:06.416 },
00:23:06.416 {
00:23:06.416 "name": "BaseBdev2",
00:23:06.416 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:06.416 "is_configured": true,
00:23:06.416 "data_offset": 256,
00:23:06.416 "data_size": 7936
00:23:06.416 }
00:23:06.416 ]
00:23:06.416 }'
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.416 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:06.416 [2024-12-05 19:42:59.834801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:06.675 [2024-12-05 19:42:59.882741] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:23:06.675 [2024-12-05 19:42:59.882848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:06.675 [2024-12-05 19:42:59.882877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:06.675 [2024-12-05 19:42:59.882889] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:06.675 "name": "raid_bdev1",
00:23:06.675 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:06.675 "strip_size_kb": 0,
00:23:06.675 "state": "online",
00:23:06.675 "raid_level": "raid1",
00:23:06.675 "superblock": true,
00:23:06.675 "num_base_bdevs": 2,
00:23:06.675 "num_base_bdevs_discovered": 1,
00:23:06.675 "num_base_bdevs_operational": 1,
00:23:06.675 "base_bdevs_list": [
00:23:06.675 {
00:23:06.675 "name": null,
00:23:06.675 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:06.675 "is_configured": false,
00:23:06.675 "data_offset": 0,
00:23:06.675 "data_size": 7936
00:23:06.675 },
00:23:06.675 {
00:23:06.675 "name": "BaseBdev2",
00:23:06.675 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:06.675 "is_configured": true,
00:23:06.675 "data_offset": 256,
00:23:06.675 "data_size": 7936
00:23:06.675 }
00:23:06.675 ]
00:23:06.675 }'
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:06.675 19:42:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.242 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:23:07.242 "name": "raid_bdev1",
00:23:07.242 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:07.242 "strip_size_kb": 0,
00:23:07.242 "state": "online",
00:23:07.242 "raid_level": "raid1",
00:23:07.242 "superblock": true,
00:23:07.242 "num_base_bdevs": 2,
00:23:07.242 "num_base_bdevs_discovered": 1,
00:23:07.242 "num_base_bdevs_operational": 1,
00:23:07.242 "base_bdevs_list": [
00:23:07.242 {
00:23:07.242 "name": null,
00:23:07.242 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:07.242 "is_configured": false,
00:23:07.242 "data_offset": 0,
00:23:07.242 "data_size": 7936
00:23:07.242 },
00:23:07.242 {
00:23:07.243 "name": "BaseBdev2",
00:23:07.243 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:07.243 "is_configured": true,
00:23:07.243 "data_offset": 256,
00:23:07.243 "data_size": 7936
00:23:07.243 }
00:23:07.243 ]
00:23:07.243 }'
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:07.243 [2024-12-05 19:43:00.602804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:23:07.243 [2024-12-05 19:43:00.602877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:07.243 [2024-12-05 19:43:00.602910] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:23:07.243 [2024-12-05 19:43:00.602925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:07.243 [2024-12-05 19:43:00.603152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:07.243 [2024-12-05 19:43:00.603176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:23:07.243 [2024-12-05 19:43:00.603244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:23:07.243 [2024-12-05 19:43:00.603264] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:23:07.243 [2024-12-05 19:43:00.603278] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:23:07.243 [2024-12-05 19:43:00.603292] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:23:07.243 BaseBdev1
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.243 19:43:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:08.179 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:08.456 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.456 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:08.456 "name": "raid_bdev1",
00:23:08.456 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:08.456 "strip_size_kb": 0,
00:23:08.456 "state": "online",
00:23:08.456 "raid_level": "raid1",
00:23:08.456 "superblock": true,
00:23:08.456 "num_base_bdevs": 2,
00:23:08.456 "num_base_bdevs_discovered": 1,
00:23:08.456 "num_base_bdevs_operational": 1,
00:23:08.456 "base_bdevs_list": [
00:23:08.456 {
00:23:08.456 "name": null,
00:23:08.456 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:08.456 "is_configured": false,
00:23:08.456 "data_offset": 0,
00:23:08.456 "data_size": 7936
00:23:08.456 },
00:23:08.456 {
00:23:08.456 "name": "BaseBdev2",
00:23:08.456 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:08.456 "is_configured": true,
00:23:08.456 "data_offset": 256,
00:23:08.456 "data_size": 7936
00:23:08.456 }
00:23:08.456 ]
00:23:08.456 }'
00:23:08.456 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:08.456 19:43:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:08.715 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:23:08.974 "name": "raid_bdev1",
00:23:08.974 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:08.974 "strip_size_kb": 0,
00:23:08.974 "state": "online",
00:23:08.974 "raid_level": "raid1",
00:23:08.974 "superblock": true,
00:23:08.974 "num_base_bdevs": 2,
00:23:08.974 "num_base_bdevs_discovered": 1,
00:23:08.974 "num_base_bdevs_operational": 1,
00:23:08.974 "base_bdevs_list": [
00:23:08.974 {
00:23:08.974 "name": null,
00:23:08.974 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:08.974 "is_configured": false,
00:23:08.974 "data_offset": 0,
00:23:08.974 "data_size": 7936
00:23:08.974 },
00:23:08.974 {
00:23:08.974 "name": "BaseBdev2",
00:23:08.974 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:08.974 "is_configured": true,
00:23:08.974 "data_offset": 256,
00:23:08.974 "data_size": 7936
00:23:08.974 }
00:23:08.974 ]
00:23:08.974 }'
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:08.974 [2024-12-05 19:43:02.299390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:08.974 [2024-12-05 19:43:02.299586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:23:08.974 [2024-12-05 19:43:02.299613] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:23:08.974 request:
00:23:08.974 {
00:23:08.974 "base_bdev": "BaseBdev1",
00:23:08.974 "raid_bdev": "raid_bdev1",
00:23:08.974 "method": "bdev_raid_add_base_bdev",
00:23:08.974 "req_id": 1
00:23:08.974 }
00:23:08.974 Got JSON-RPC error response
00:23:08.974 response:
00:23:08.974 {
00:23:08.974 "code": -22,
00:23:08.974 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:23:08.974 }
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:08.974 19:43:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:09.911 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.170 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:10.170 "name": "raid_bdev1",
00:23:10.170 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:10.170 "strip_size_kb": 0,
00:23:10.170 "state": "online",
00:23:10.170 "raid_level": "raid1",
00:23:10.170 "superblock": true,
00:23:10.170 "num_base_bdevs": 2,
00:23:10.170 "num_base_bdevs_discovered": 1,
00:23:10.170 "num_base_bdevs_operational": 1,
00:23:10.170 "base_bdevs_list": [
00:23:10.170 {
00:23:10.170 "name": null,
00:23:10.170 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:10.170 "is_configured": false,
00:23:10.170 "data_offset": 0,
00:23:10.170 "data_size": 7936
00:23:10.170 },
00:23:10.170 {
00:23:10.170 "name": "BaseBdev2",
00:23:10.170 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:10.170 "is_configured": true,
00:23:10.170 "data_offset": 256,
00:23:10.170 "data_size": 7936
00:23:10.170 }
00:23:10.170 ]
00:23:10.170 }'
00:23:10.170 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:10.170 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:23:10.429 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:10.688 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:10.688 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:23:10.688 "name": "raid_bdev1",
00:23:10.688 "uuid": "133e13ac-c682-4db9-9f9c-f640ebea43a4",
00:23:10.688 "strip_size_kb": 0,
00:23:10.688 "state": "online",
00:23:10.688 "raid_level": "raid1",
00:23:10.688 "superblock": true,
00:23:10.688 "num_base_bdevs": 2,
00:23:10.688 "num_base_bdevs_discovered": 1,
00:23:10.688 "num_base_bdevs_operational": 1,
00:23:10.688 "base_bdevs_list": [
00:23:10.688 {
00:23:10.688 "name": null,
00:23:10.688 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:10.688 "is_configured": false,
00:23:10.688 "data_offset": 0,
00:23:10.688 "data_size": 7936
00:23:10.688 },
00:23:10.688 {
00:23:10.688 "name": "BaseBdev2",
00:23:10.688 "uuid": "76d397eb-2888-5cc1-b720-be6cf8089075",
00:23:10.688 "is_configured": true,
00:23:10.688 "data_offset": 256,
00:23:10.688 "data_size": 7936
00:23:10.688 }
00:23:10.688 ]
00:23:10.688 }'
00:23:10.688 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:23:10.688 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:23:10.688 19:43:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89520
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89520 ']'
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89520
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89520
00:23:10.688 killing process with pid 89520
Received shutdown signal, test time was about 60.000000 seconds
00:23:10.688
00:23:10.688 Latency(us)
00:23:10.688 [2024-12-05T19:43:04.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.688 [2024-12-05T19:43:04.129Z] ===================================================================================================================
00:23:10.688 [2024-12-05T19:43:04.129Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:10.688 19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0
19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89520'
19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89520
19:43:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89520
[2024-12-05 19:43:04.040694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-12-05 19:43:04.040862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-12-05 19:43:04.040929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to
free all in destruct 00:23:10.688 [2024-12-05 19:43:04.040948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:10.947 [2024-12-05 19:43:04.313501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.340 19:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:23:12.340 00:23:12.340 real 0m18.773s 00:23:12.340 user 0m25.703s 00:23:12.340 sys 0m1.450s 00:23:12.340 ************************************ 00:23:12.340 END TEST raid_rebuild_test_sb_md_interleaved 00:23:12.340 ************************************ 00:23:12.340 19:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.340 19:43:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:12.340 19:43:05 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:23:12.340 19:43:05 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:23:12.340 19:43:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89520 ']' 00:23:12.340 19:43:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89520 00:23:12.340 19:43:05 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:23:12.340 ************************************ 00:23:12.340 END TEST bdev_raid 00:23:12.340 ************************************ 00:23:12.340 00:23:12.340 real 13m9.050s 00:23:12.340 user 18m31.121s 00:23:12.340 sys 1m48.587s 00:23:12.340 19:43:05 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.340 19:43:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.340 19:43:05 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:12.340 19:43:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:12.340 19:43:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.340 19:43:05 -- common/autotest_common.sh@10 -- # set +x 00:23:12.340 
************************************ 00:23:12.340 START TEST spdkcli_raid 00:23:12.340 ************************************ 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:12.340 * Looking for test storage... 00:23:12.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.340 19:43:05 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.340 19:43:05 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:12.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.340 --rc genhtml_branch_coverage=1 00:23:12.340 --rc genhtml_function_coverage=1 00:23:12.340 --rc genhtml_legend=1 00:23:12.340 --rc geninfo_all_blocks=1 00:23:12.340 --rc geninfo_unexecuted_blocks=1 00:23:12.340 00:23:12.341 ' 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.341 --rc genhtml_branch_coverage=1 00:23:12.341 --rc genhtml_function_coverage=1 00:23:12.341 --rc genhtml_legend=1 00:23:12.341 --rc geninfo_all_blocks=1 00:23:12.341 --rc geninfo_unexecuted_blocks=1 00:23:12.341 00:23:12.341 ' 00:23:12.341 
19:43:05 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.341 --rc genhtml_branch_coverage=1 00:23:12.341 --rc genhtml_function_coverage=1 00:23:12.341 --rc genhtml_legend=1 00:23:12.341 --rc geninfo_all_blocks=1 00:23:12.341 --rc geninfo_unexecuted_blocks=1 00:23:12.341 00:23:12.341 ' 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:12.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.341 --rc genhtml_branch_coverage=1 00:23:12.341 --rc genhtml_function_coverage=1 00:23:12.341 --rc genhtml_legend=1 00:23:12.341 --rc geninfo_all_blocks=1 00:23:12.341 --rc geninfo_unexecuted_blocks=1 00:23:12.341 00:23:12.341 ' 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:12.341 19:43:05 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90203 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:12.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.341 19:43:05 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90203 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90203 ']' 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.341 19:43:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.599 [2024-12-05 19:43:05.816137] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:12.599 [2024-12-05 19:43:05.817048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90203 ] 00:23:12.599 [2024-12-05 19:43:06.015186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:12.856 [2024-12-05 19:43:06.148236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.856 [2024-12-05 19:43:06.148253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.788 19:43:07 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.788 19:43:07 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:23:13.788 19:43:07 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:23:13.788 19:43:07 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.788 19:43:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.788 19:43:07 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:23:13.788 19:43:07 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.788 19:43:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.788 19:43:07 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:13.788 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:13.788 ' 00:23:15.679 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:23:15.679 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:23:15.679 19:43:08 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:23:15.679 19:43:08 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.679 19:43:08 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.679 19:43:08 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:23:15.679 19:43:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.679 19:43:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.679 19:43:08 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:23:15.679 ' 00:23:16.610 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:23:16.610 19:43:09 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:23:16.610 19:43:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.610 19:43:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.610 19:43:10 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:23:16.610 19:43:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.610 19:43:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.610 19:43:10 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:23:16.610 19:43:10 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:23:17.174 19:43:10 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:23:17.432 19:43:10 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:23:17.432 19:43:10 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:23:17.432 19:43:10 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.432 19:43:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.432 19:43:10 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:23:17.432 19:43:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.432 19:43:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.432 19:43:10 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:23:17.432 ' 00:23:18.393 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:23:18.393 19:43:11 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:23:18.393 19:43:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.393 19:43:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.653 19:43:11 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:23:18.653 19:43:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.653 19:43:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.653 19:43:11 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:23:18.653 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:23:18.653 ' 00:23:20.032 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:23:20.032 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:23:20.032 19:43:13 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:23:20.032 19:43:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.032 19:43:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.032 19:43:13 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90203 00:23:20.032 19:43:13 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90203 ']' 00:23:20.032 19:43:13 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90203 00:23:20.032 19:43:13 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:23:20.032 19:43:13 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.033 19:43:13 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90203 00:23:20.290 19:43:13 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.290 killing process with pid 90203 00:23:20.290 19:43:13 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.290 19:43:13 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90203' 00:23:20.290 19:43:13 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90203 00:23:20.290 19:43:13 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90203 00:23:22.817 Process with pid 90203 is not found 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90203 ']' 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90203 00:23:22.817 19:43:15 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90203 ']' 00:23:22.817 19:43:15 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90203 00:23:22.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90203) - No such process 00:23:22.817 19:43:15 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90203 is not found' 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:22.817 19:43:15 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:22.817 ************************************ 00:23:22.817 END TEST spdkcli_raid 
00:23:22.817 ************************************ 00:23:22.817 00:23:22.817 real 0m10.234s 00:23:22.817 user 0m21.348s 00:23:22.817 sys 0m1.071s 00:23:22.817 19:43:15 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.817 19:43:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:22.817 19:43:15 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:22.817 19:43:15 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.817 19:43:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.817 19:43:15 -- common/autotest_common.sh@10 -- # set +x 00:23:22.817 ************************************ 00:23:22.817 START TEST blockdev_raid5f 00:23:22.817 ************************************ 00:23:22.817 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:22.817 * Looking for test storage... 00:23:22.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:22.817 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:22.817 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:22.817 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:23:22.817 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.817 19:43:15 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.818 19:43:15 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:22.818 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.818 --rc genhtml_branch_coverage=1 00:23:22.818 --rc genhtml_function_coverage=1 00:23:22.818 --rc genhtml_legend=1 00:23:22.818 --rc geninfo_all_blocks=1 00:23:22.818 --rc geninfo_unexecuted_blocks=1 00:23:22.818 00:23:22.818 ' 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:22.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.818 --rc genhtml_branch_coverage=1 00:23:22.818 --rc genhtml_function_coverage=1 00:23:22.818 --rc genhtml_legend=1 00:23:22.818 --rc geninfo_all_blocks=1 00:23:22.818 --rc geninfo_unexecuted_blocks=1 00:23:22.818 00:23:22.818 ' 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:22.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.818 --rc genhtml_branch_coverage=1 00:23:22.818 --rc genhtml_function_coverage=1 00:23:22.818 --rc genhtml_legend=1 00:23:22.818 --rc geninfo_all_blocks=1 00:23:22.818 --rc geninfo_unexecuted_blocks=1 00:23:22.818 00:23:22.818 ' 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:22.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.818 --rc genhtml_branch_coverage=1 00:23:22.818 --rc genhtml_function_coverage=1 00:23:22.818 --rc genhtml_legend=1 00:23:22.818 --rc geninfo_all_blocks=1 00:23:22.818 --rc geninfo_unexecuted_blocks=1 00:23:22.818 00:23:22.818 ' 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90483 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:22.818 19:43:15 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90483 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90483 ']' 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.818 19:43:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:22.818 [2024-12-05 19:43:16.095532] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:23:22.818 [2024-12-05 19:43:16.096023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90483 ] 00:23:23.077 [2024-12-05 19:43:16.283737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.077 [2024-12-05 19:43:16.418231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:23:24.013 19:43:17 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.013 Malloc0 00:23:24.013 Malloc1 00:23:24.013 Malloc2 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.013 19:43:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.013 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:23:24.014 19:43:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.014 19:43:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aade1187-ead2-42f9-bcdc-cd7ae86f0f34"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aade1187-ead2-42f9-bcdc-cd7ae86f0f34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aade1187-ead2-42f9-bcdc-cd7ae86f0f34",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "77f493f9-dcee-44c2-bc12-9f4f42b06496",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"41368f71-4fa0-46a6-8dc3-5e626ff614a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e523381c-9433-4d17-aeac-ff2fbfa17ddd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:23:24.273 19:43:17 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90483 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90483 ']' 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90483 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90483 00:23:24.273 killing process with pid 90483 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90483' 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90483 00:23:24.273 19:43:17 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90483 00:23:27.558 19:43:20 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:27.558 19:43:20 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:27.558 19:43:20 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:27.558 19:43:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.558 19:43:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:27.558 ************************************ 00:23:27.558 START TEST bdev_hello_world 00:23:27.558 ************************************ 00:23:27.558 19:43:20 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:27.558 [2024-12-05 19:43:20.581717] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:23:27.559 [2024-12-05 19:43:20.581893] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90549 ] 00:23:27.559 [2024-12-05 19:43:20.767156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.559 [2024-12-05 19:43:20.914760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.148 [2024-12-05 19:43:21.494184] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:28.148 [2024-12-05 19:43:21.494273] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:23:28.148 [2024-12-05 19:43:21.494300] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:28.148 [2024-12-05 19:43:21.494908] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:28.148 [2024-12-05 19:43:21.495087] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:28.148 [2024-12-05 19:43:21.495122] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:28.148 [2024-12-05 19:43:21.495197] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:23:28.148 00:23:28.148 [2024-12-05 19:43:21.495229] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:29.523 00:23:29.523 real 0m2.441s 00:23:29.523 user 0m1.959s 00:23:29.523 sys 0m0.356s 00:23:29.523 19:43:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.523 19:43:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:29.523 ************************************ 00:23:29.523 END TEST bdev_hello_world 00:23:29.523 ************************************ 00:23:29.523 19:43:22 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:23:29.523 19:43:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.523 19:43:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.523 19:43:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.782 ************************************ 00:23:29.782 START TEST bdev_bounds 00:23:29.782 ************************************ 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:29.782 Process bdevio pid: 90597 00:23:29.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90597 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90597' 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90597 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90597 ']' 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.782 19:43:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:29.782 [2024-12-05 19:43:23.066486] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:29.782 [2024-12-05 19:43:23.066993] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90597 ] 00:23:30.040 [2024-12-05 19:43:23.244816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.040 [2024-12-05 19:43:23.400033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.040 [2024-12-05 19:43:23.400146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.040 [2024-12-05 19:43:23.400171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.973 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.973 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:30.973 19:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:30.973 I/O targets: 00:23:30.973 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:23:30.973 00:23:30.973 00:23:30.973 CUnit - A unit testing framework for C - Version 2.1-3 00:23:30.973 http://cunit.sourceforge.net/ 00:23:30.973 00:23:30.973 00:23:30.973 Suite: bdevio tests on: raid5f 00:23:30.973 Test: blockdev write read block ...passed 00:23:30.973 Test: blockdev write zeroes read block ...passed 00:23:30.973 Test: blockdev write zeroes read no split ...passed 00:23:30.973 Test: blockdev write zeroes read split ...passed 00:23:31.232 Test: blockdev write zeroes read split partial ...passed 00:23:31.232 Test: blockdev reset ...passed 00:23:31.232 Test: blockdev write read 8 blocks ...passed 00:23:31.232 Test: blockdev write read size > 128k ...passed 00:23:31.232 Test: blockdev write read invalid size ...passed 00:23:31.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:23:31.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:31.232 Test: blockdev write read max offset ...passed 00:23:31.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:31.232 Test: blockdev writev readv 8 blocks ...passed 00:23:31.232 Test: blockdev writev readv 30 x 1block ...passed 00:23:31.232 Test: blockdev writev readv block ...passed 00:23:31.232 Test: blockdev writev readv size > 128k ...passed 00:23:31.232 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:31.232 Test: blockdev comparev and writev ...passed 00:23:31.232 Test: blockdev nvme passthru rw ...passed 00:23:31.232 Test: blockdev nvme passthru vendor specific ...passed 00:23:31.232 Test: blockdev nvme admin passthru ...passed 00:23:31.232 Test: blockdev copy ...passed 00:23:31.232 00:23:31.232 Run Summary: Type Total Ran Passed Failed Inactive 00:23:31.232 suites 1 1 n/a 0 0 00:23:31.232 tests 23 23 23 0 0 00:23:31.232 asserts 130 130 130 0 n/a 00:23:31.232 00:23:31.232 Elapsed time = 0.621 seconds 00:23:31.232 0 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90597 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90597 ']' 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90597 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90597 00:23:31.232 killing process with pid 90597 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.232 19:43:24 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90597' 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90597 00:23:31.232 19:43:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90597 00:23:32.606 ************************************ 00:23:32.606 END TEST bdev_bounds 00:23:32.606 ************************************ 00:23:32.606 19:43:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:32.606 00:23:32.606 real 0m3.009s 00:23:32.606 user 0m7.575s 00:23:32.606 sys 0m0.490s 00:23:32.606 19:43:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.606 19:43:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:32.606 19:43:26 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:32.606 19:43:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:32.606 19:43:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.606 19:43:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:32.606 ************************************ 00:23:32.606 START TEST bdev_nbd 00:23:32.606 ************************************ 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:32.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90658 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90658 /var/tmp/spdk-nbd.sock 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90658 ']' 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.606 19:43:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:32.865 [2024-12-05 19:43:26.137980] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:32.865 [2024-12-05 19:43:26.138167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.123 [2024-12-05 19:43:26.314651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.123 [2024-12-05 19:43:26.462203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:33.691 19:43:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:34.258 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.259 1+0 records in 00:23:34.259 1+0 records out 00:23:34.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358151 s, 11.4 MB/s 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:34.259 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:34.520 { 00:23:34.520 "nbd_device": "/dev/nbd0", 00:23:34.520 "bdev_name": "raid5f" 00:23:34.520 } 00:23:34.520 ]' 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:34.520 { 00:23:34.520 "nbd_device": "/dev/nbd0", 00:23:34.520 "bdev_name": "raid5f" 00:23:34.520 } 00:23:34.520 ]' 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:34.520 19:43:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.786 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.354 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:23:35.613 /dev/nbd0 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:35.613 19:43:28 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.613 1+0 records in 00:23:35.613 1+0 records out 00:23:35.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404547 s, 10.1 MB/s 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:35.613 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.614 19:43:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:35.874 { 00:23:35.874 "nbd_device": "/dev/nbd0", 00:23:35.874 "bdev_name": "raid5f" 00:23:35.874 } 00:23:35.874 ]' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:35.874 { 00:23:35.874 "nbd_device": "/dev/nbd0", 00:23:35.874 "bdev_name": "raid5f" 00:23:35.874 } 00:23:35.874 ]' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:35.874 256+0 records in 00:23:35.874 256+0 records out 00:23:35.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815656 s, 129 MB/s 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:35.874 256+0 records in 00:23:35.874 256+0 records out 00:23:35.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.040473 s, 25.9 MB/s 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.874 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:36.442 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:23:36.700 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:36.700 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:36.700 19:43:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:36.700 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:36.958 malloc_lvol_verify 00:23:36.958 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:37.217 c11e5502-8841-4a89-83c9-0a7f38023538 00:23:37.217 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:37.475 e1909a3a-0d5c-4461-9810-8d3ca82fd8a8 00:23:37.734 19:43:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:37.993 /dev/nbd0 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:37.993 mke2fs 1.47.0 (5-Feb-2023) 00:23:37.993 Discarding device blocks: 0/4096 done 00:23:37.993 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:37.993 00:23:37.993 Allocating group tables: 0/1 done 00:23:37.993 Writing inode tables: 0/1 done 00:23:37.993 Creating journal (1024 blocks): done 00:23:37.993 Writing superblocks and filesystem accounting information: 0/1 done 00:23:37.993 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.993 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90658 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90658 ']' 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90658 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90658 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.252 killing process with pid 90658 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90658' 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90658 00:23:38.252 19:43:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90658 00:23:39.629 19:43:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:39.629 00:23:39.629 real 0m7.033s 00:23:39.629 user 0m10.075s 00:23:39.629 sys 0m1.561s 00:23:39.629 19:43:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.629 ************************************ 00:23:39.629 END TEST bdev_nbd 00:23:39.629 ************************************ 00:23:39.629 19:43:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:39.887 19:43:33 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:39.887 19:43:33 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:23:39.887 19:43:33 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:23:39.887 19:43:33 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:39.887 19:43:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.888 19:43:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.888 19:43:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:39.888 ************************************ 00:23:39.888 START TEST bdev_fio 00:23:39.888 ************************************ 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:39.888 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:39.888 ************************************ 00:23:39.888 START TEST bdev_fio_rw_verify 00:23:39.888 ************************************ 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:39.888 19:43:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:40.147 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:40.147 fio-3.35 00:23:40.147 Starting 1 thread 00:23:52.343 00:23:52.343 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90875: Thu Dec 5 19:43:44 2024 00:23:52.343 read: IOPS=7987, BW=31.2MiB/s (32.7MB/s)(312MiB/10001msec) 00:23:52.343 slat (usec): min=26, max=212, avg=31.83, stdev= 5.60 00:23:52.343 clat (usec): min=15, max=542, avg=200.57, stdev=75.91 00:23:52.343 lat (usec): min=47, max=606, avg=232.39, stdev=76.70 00:23:52.343 clat percentiles (usec): 00:23:52.343 | 50.000th=[ 204], 99.000th=[ 347], 99.900th=[ 420], 99.990th=[ 490], 00:23:52.343 | 99.999th=[ 545] 00:23:52.343 write: IOPS=8409, BW=32.8MiB/s (34.4MB/s)(325MiB/9890msec); 0 zone resets 00:23:52.343 slat (usec): min=12, max=313, avg=24.22, stdev= 6.23 00:23:52.343 clat (usec): min=86, max=1154, avg=457.00, stdev=60.84 00:23:52.344 lat (usec): min=108, max=1468, avg=481.22, stdev=62.65 00:23:52.344 clat percentiles (usec): 00:23:52.344 | 50.000th=[ 461], 99.000th=[ 635], 99.900th=[ 783], 99.990th=[ 971], 00:23:52.344 | 99.999th=[ 1156] 00:23:52.344 bw ( KiB/s): min=29944, max=35960, per=98.79%, avg=33231.16, stdev=1464.36, samples=19 00:23:52.344 iops : min= 7486, max= 8990, avg=8307.79, stdev=366.09, samples=19 00:23:52.344 lat (usec) : 20=0.01%, 100=5.62%, 250=28.74%, 
500=54.81%, 750=10.75% 00:23:52.344 lat (usec) : 1000=0.08% 00:23:52.344 lat (msec) : 2=0.01% 00:23:52.344 cpu : usr=98.50%, sys=0.55%, ctx=23, majf=0, minf=7047 00:23:52.344 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.344 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.344 issued rwts: total=79880,83166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.344 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:52.344 00:23:52.344 Run status group 0 (all jobs): 00:23:52.344 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=312MiB (327MB), run=10001-10001msec 00:23:52.344 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=325MiB (341MB), run=9890-9890msec 00:23:52.912 ----------------------------------------------------- 00:23:52.912 Suppressions used: 00:23:52.912 count bytes template 00:23:52.912 1 7 /usr/src/fio/parse.c 00:23:52.912 786 75456 /usr/src/fio/iolog.c 00:23:52.912 1 8 libtcmalloc_minimal.so 00:23:52.912 1 904 libcrypto.so 00:23:52.912 ----------------------------------------------------- 00:23:52.912 00:23:52.912 00:23:52.912 real 0m12.990s 00:23:52.912 user 0m13.512s 00:23:52.912 sys 0m0.848s 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:52.912 ************************************ 00:23:52.912 END TEST bdev_fio_rw_verify 00:23:52.912 ************************************ 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:52.912 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aade1187-ead2-42f9-bcdc-cd7ae86f0f34"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aade1187-ead2-42f9-bcdc-cd7ae86f0f34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aade1187-ead2-42f9-bcdc-cd7ae86f0f34",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "77f493f9-dcee-44c2-bc12-9f4f42b06496",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "41368f71-4fa0-46a6-8dc3-5e626ff614a7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e523381c-9433-4d17-aeac-ff2fbfa17ddd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.913 /home/vagrant/spdk_repo/spdk 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:52.913 00:23:52.913 real 0m13.220s 
00:23:52.913 user 0m13.636s 00:23:52.913 sys 0m0.932s 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.913 19:43:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 ************************************ 00:23:52.913 END TEST bdev_fio 00:23:52.913 ************************************ 00:23:53.172 19:43:46 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:53.172 19:43:46 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:53.172 19:43:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:53.172 19:43:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.172 19:43:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:53.172 ************************************ 00:23:53.172 START TEST bdev_verify 00:23:53.172 ************************************ 00:23:53.172 19:43:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:53.172 [2024-12-05 19:43:46.504683] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 
00:23:53.172 [2024-12-05 19:43:46.504888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91038 ] 00:23:53.431 [2024-12-05 19:43:46.690130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:53.431 [2024-12-05 19:43:46.848780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.431 [2024-12-05 19:43:46.848805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.999 Running I/O for 5 seconds... 00:23:56.314 9254.00 IOPS, 36.15 MiB/s [2024-12-05T19:43:50.693Z] 9203.50 IOPS, 35.95 MiB/s [2024-12-05T19:43:51.630Z] 9262.00 IOPS, 36.18 MiB/s [2024-12-05T19:43:52.569Z] 9206.50 IOPS, 35.96 MiB/s [2024-12-05T19:43:52.569Z] 9220.00 IOPS, 36.02 MiB/s 00:23:59.128 Latency(us) 00:23:59.128 [2024-12-05T19:43:52.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.128 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:59.128 Verification LBA range: start 0x0 length 0x2000 00:23:59.128 raid5f : 5.02 4703.36 18.37 0.00 0.00 41251.72 303.48 33602.09 00:23:59.128 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:59.128 Verification LBA range: start 0x2000 length 0x2000 00:23:59.129 raid5f : 5.02 4528.42 17.69 0.00 0.00 42764.13 297.89 35270.28 00:23:59.129 [2024-12-05T19:43:52.570Z] =================================================================================================================== 00:23:59.129 [2024-12-05T19:43:52.570Z] Total : 9231.78 36.06 0.00 0.00 41993.18 297.89 35270.28 00:24:00.505 00:24:00.505 real 0m7.435s 00:24:00.505 user 0m13.548s 00:24:00.505 sys 0m0.373s 00:24:00.505 19:43:53 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.505 19:43:53 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:00.505 ************************************ 00:24:00.505 END TEST bdev_verify 00:24:00.505 ************************************ 00:24:00.505 19:43:53 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:00.505 19:43:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:00.505 19:43:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.505 19:43:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:00.505 ************************************ 00:24:00.505 START TEST bdev_verify_big_io 00:24:00.505 ************************************ 00:24:00.505 19:43:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:00.763 [2024-12-05 19:43:53.978863] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:00.763 [2024-12-05 19:43:53.979010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91131 ] 00:24:00.763 [2024-12-05 19:43:54.154709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.022 [2024-12-05 19:43:54.302682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.022 [2024-12-05 19:43:54.302723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.589 Running I/O for 5 seconds... 
00:24:03.462 506.00 IOPS, 31.62 MiB/s [2024-12-05T19:43:58.278Z] 507.00 IOPS, 31.69 MiB/s [2024-12-05T19:43:59.290Z] 590.67 IOPS, 36.92 MiB/s [2024-12-05T19:44:00.221Z] 602.00 IOPS, 37.62 MiB/s [2024-12-05T19:44:00.221Z] 621.60 IOPS, 38.85 MiB/s 00:24:06.780 Latency(us) 00:24:06.780 [2024-12-05T19:44:00.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.780 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:06.780 Verification LBA range: start 0x0 length 0x200 00:24:06.780 raid5f : 5.21 328.49 20.53 0.00 0.00 9721001.37 202.01 440401.92 00:24:06.780 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:06.780 Verification LBA range: start 0x200 length 0x200 00:24:06.780 raid5f : 5.15 320.81 20.05 0.00 0.00 9949009.93 222.49 444214.92 00:24:06.780 [2024-12-05T19:44:00.221Z] =================================================================================================================== 00:24:06.780 [2024-12-05T19:44:00.221Z] Total : 649.30 40.58 0.00 0.00 9832904.50 202.01 444214.92 00:24:08.152 00:24:08.152 real 0m7.674s 00:24:08.152 user 0m14.045s 00:24:08.152 sys 0m0.368s 00:24:08.152 19:44:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.152 19:44:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:08.152 ************************************ 00:24:08.152 END TEST bdev_verify_big_io 00:24:08.152 ************************************ 00:24:08.410 19:44:01 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:08.410 19:44:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:08.410 19:44:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.410 19:44:01 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.410 ************************************ 00:24:08.410 START TEST bdev_write_zeroes 00:24:08.410 ************************************ 00:24:08.410 19:44:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:08.410 [2024-12-05 19:44:01.726743] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:08.410 [2024-12-05 19:44:01.726898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91231 ] 00:24:08.667 [2024-12-05 19:44:01.903130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.667 [2024-12-05 19:44:02.050947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.234 Running I/O for 1 seconds... 
00:24:10.648 19071.00 IOPS, 74.50 MiB/s 00:24:10.648 Latency(us) 00:24:10.648 [2024-12-05T19:44:04.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.648 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:10.648 raid5f : 1.01 19034.45 74.35 0.00 0.00 6696.56 2055.45 9413.35 00:24:10.648 [2024-12-05T19:44:04.089Z] =================================================================================================================== 00:24:10.648 [2024-12-05T19:44:04.089Z] Total : 19034.45 74.35 0.00 0.00 6696.56 2055.45 9413.35 00:24:12.024 00:24:12.024 real 0m3.482s 00:24:12.024 user 0m2.965s 00:24:12.024 sys 0m0.382s 00:24:12.024 19:44:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.024 19:44:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:12.024 ************************************ 00:24:12.024 END TEST bdev_write_zeroes 00:24:12.024 ************************************ 00:24:12.024 19:44:05 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:12.024 19:44:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:12.024 19:44:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.024 19:44:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:12.024 ************************************ 00:24:12.024 START TEST bdev_json_nonenclosed 00:24:12.024 ************************************ 00:24:12.024 19:44:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:12.024 [2024-12-05 
19:44:05.241265] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:12.024 [2024-12-05 19:44:05.241425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91283 ] 00:24:12.024 [2024-12-05 19:44:05.416199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.281 [2024-12-05 19:44:05.566488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.281 [2024-12-05 19:44:05.566629] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:12.281 [2024-12-05 19:44:05.566673] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:12.281 [2024-12-05 19:44:05.566690] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:12.539 00:24:12.539 real 0m0.710s 00:24:12.539 user 0m0.460s 00:24:12.539 sys 0m0.143s 00:24:12.539 19:44:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.539 19:44:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:12.539 ************************************ 00:24:12.539 END TEST bdev_json_nonenclosed 00:24:12.539 ************************************ 00:24:12.539 19:44:05 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:12.539 19:44:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:12.539 19:44:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.539 19:44:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:12.539 
************************************ 00:24:12.539 START TEST bdev_json_nonarray 00:24:12.539 ************************************ 00:24:12.539 19:44:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:12.797 [2024-12-05 19:44:06.006144] Starting SPDK v25.01-pre git sha1 98eca6fa0 / DPDK 24.03.0 initialization... 00:24:12.797 [2024-12-05 19:44:06.006333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91310 ] 00:24:12.797 [2024-12-05 19:44:06.182297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.055 [2024-12-05 19:44:06.329741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.055 [2024-12-05 19:44:06.329885] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:24:13.055 [2024-12-05 19:44:06.329917] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:13.055 [2024-12-05 19:44:06.329947] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:13.314 00:24:13.314 real 0m0.716s 00:24:13.314 user 0m0.476s 00:24:13.314 sys 0m0.134s 00:24:13.314 19:44:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.314 19:44:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:13.314 ************************************ 00:24:13.314 END TEST bdev_json_nonarray 00:24:13.314 ************************************ 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:24:13.314 19:44:06 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:24:13.314 00:24:13.314 real 0m50.923s 00:24:13.314 user 1m9.549s 00:24:13.314 sys 0m5.699s 00:24:13.314 19:44:06 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.314 19:44:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:13.314 
************************************ 00:24:13.314 END TEST blockdev_raid5f 00:24:13.314 ************************************ 00:24:13.314 19:44:06 -- spdk/autotest.sh@194 -- # uname -s 00:24:13.314 19:44:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:13.314 19:44:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:13.314 19:44:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:13.314 19:44:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:24:13.314 19:44:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:13.314 19:44:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:13.314 19:44:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.314 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:13.593 19:44:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:13.593 19:44:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:13.593 19:44:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:13.593 19:44:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:13.593 19:44:06 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:13.593 19:44:06 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:24:13.593 19:44:06 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:13.593 19:44:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.593 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:13.593 19:44:06 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:13.593 19:44:06 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:13.593 19:44:06 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:13.593 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:15.507 INFO: APP EXITING 00:24:15.507 INFO: killing all VMs 00:24:15.507 INFO: killing vhost app 00:24:15.507 INFO: EXIT DONE 00:24:15.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:15.507 Waiting for block devices as requested 00:24:15.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.766 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:16.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:16.332 Cleaning 00:24:16.332 Removing: /var/run/dpdk/spdk0/config 00:24:16.332 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:16.332 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:16.332 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:16.332 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:16.332 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:16.332 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:16.332 Removing: /dev/shm/spdk_tgt_trace.pid56817 00:24:16.332 Removing: /var/run/dpdk/spdk0 00:24:16.332 Removing: /var/run/dpdk/spdk_pid56588 00:24:16.332 Removing: /var/run/dpdk/spdk_pid56817 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57052 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57156 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57212 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57340 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57358 
00:24:16.332 Removing: /var/run/dpdk/spdk_pid57568 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57673 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57780 00:24:16.332 Removing: /var/run/dpdk/spdk_pid57902 00:24:16.332 Removing: /var/run/dpdk/spdk_pid58010 00:24:16.332 Removing: /var/run/dpdk/spdk_pid58044 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58086 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58161 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58269 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58744 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58821 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58895 00:24:16.590 Removing: /var/run/dpdk/spdk_pid58911 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59066 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59088 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59237 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59257 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59327 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59345 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59409 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59432 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59633 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59664 00:24:16.590 Removing: /var/run/dpdk/spdk_pid59753 00:24:16.590 Removing: /var/run/dpdk/spdk_pid61137 00:24:16.590 Removing: /var/run/dpdk/spdk_pid61347 00:24:16.590 Removing: /var/run/dpdk/spdk_pid61494 00:24:16.590 Removing: /var/run/dpdk/spdk_pid62154 00:24:16.590 Removing: /var/run/dpdk/spdk_pid62369 00:24:16.590 Removing: /var/run/dpdk/spdk_pid62511 00:24:16.590 Removing: /var/run/dpdk/spdk_pid63171 00:24:16.590 Removing: /var/run/dpdk/spdk_pid63508 00:24:16.590 Removing: /var/run/dpdk/spdk_pid63653 00:24:16.590 Removing: /var/run/dpdk/spdk_pid65066 00:24:16.590 Removing: /var/run/dpdk/spdk_pid65330 00:24:16.590 Removing: /var/run/dpdk/spdk_pid65476 00:24:16.590 Removing: /var/run/dpdk/spdk_pid66883 00:24:16.590 Removing: /var/run/dpdk/spdk_pid67147 00:24:16.590 Removing: /var/run/dpdk/spdk_pid67293 
00:24:16.590 Removing: /var/run/dpdk/spdk_pid68706 00:24:16.590 Removing: /var/run/dpdk/spdk_pid69157 00:24:16.590 Removing: /var/run/dpdk/spdk_pid69303 00:24:16.590 Removing: /var/run/dpdk/spdk_pid70818 00:24:16.590 Removing: /var/run/dpdk/spdk_pid71090 00:24:16.590 Removing: /var/run/dpdk/spdk_pid71237 00:24:16.590 Removing: /var/run/dpdk/spdk_pid72750 00:24:16.590 Removing: /var/run/dpdk/spdk_pid73015 00:24:16.590 Removing: /var/run/dpdk/spdk_pid73166 00:24:16.590 Removing: /var/run/dpdk/spdk_pid74674 00:24:16.591 Removing: /var/run/dpdk/spdk_pid75178 00:24:16.591 Removing: /var/run/dpdk/spdk_pid75318 00:24:16.591 Removing: /var/run/dpdk/spdk_pid75462 00:24:16.591 Removing: /var/run/dpdk/spdk_pid75924 00:24:16.591 Removing: /var/run/dpdk/spdk_pid76697 00:24:16.591 Removing: /var/run/dpdk/spdk_pid77083 00:24:16.591 Removing: /var/run/dpdk/spdk_pid77785 00:24:16.591 Removing: /var/run/dpdk/spdk_pid78278 00:24:16.591 Removing: /var/run/dpdk/spdk_pid79076 00:24:16.591 Removing: /var/run/dpdk/spdk_pid79504 00:24:16.591 Removing: /var/run/dpdk/spdk_pid81503 00:24:16.591 Removing: /var/run/dpdk/spdk_pid81959 00:24:16.591 Removing: /var/run/dpdk/spdk_pid82412 00:24:16.591 Removing: /var/run/dpdk/spdk_pid84540 00:24:16.591 Removing: /var/run/dpdk/spdk_pid85037 00:24:16.591 Removing: /var/run/dpdk/spdk_pid85547 00:24:16.591 Removing: /var/run/dpdk/spdk_pid86621 00:24:16.591 Removing: /var/run/dpdk/spdk_pid86950 00:24:16.591 Removing: /var/run/dpdk/spdk_pid87905 00:24:16.591 Removing: /var/run/dpdk/spdk_pid88235 00:24:16.591 Removing: /var/run/dpdk/spdk_pid89192 00:24:16.591 Removing: /var/run/dpdk/spdk_pid89520 00:24:16.591 Removing: /var/run/dpdk/spdk_pid90203 00:24:16.591 Removing: /var/run/dpdk/spdk_pid90483 00:24:16.591 Removing: /var/run/dpdk/spdk_pid90549 00:24:16.591 Removing: /var/run/dpdk/spdk_pid90597 00:24:16.591 Removing: /var/run/dpdk/spdk_pid90860 00:24:16.591 Removing: /var/run/dpdk/spdk_pid91038 00:24:16.591 Removing: /var/run/dpdk/spdk_pid91131 
00:24:16.591 Removing: /var/run/dpdk/spdk_pid91231 00:24:16.591 Removing: /var/run/dpdk/spdk_pid91283 00:24:16.591 Removing: /var/run/dpdk/spdk_pid91310 00:24:16.591 Clean 00:24:16.849 19:44:10 -- common/autotest_common.sh@1453 -- # return 0 00:24:16.849 19:44:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:16.849 19:44:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.849 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:16.849 19:44:10 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:16.849 19:44:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.849 19:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:16.849 19:44:10 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:16.849 19:44:10 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:16.849 19:44:10 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:16.849 19:44:10 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:16.849 19:44:10 -- spdk/autotest.sh@398 -- # hostname 00:24:16.849 19:44:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:17.109 geninfo: WARNING: invalid characters removed from testname! 
00:24:43.691 19:44:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:46.224 19:44:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:48.823 19:44:41 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:51.356 19:44:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:53.888 19:44:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:56.420 19:44:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:58.950 19:44:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:58.950 19:44:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:58.950 19:44:52 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:58.950 19:44:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:58.950 19:44:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:58.950 19:44:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:58.950 + [[ -n 5208 ]] 00:24:58.950 + sudo kill 5208 00:24:58.959 [Pipeline] } 00:24:58.972 [Pipeline] // timeout 00:24:58.977 [Pipeline] } 00:24:58.990 [Pipeline] // stage 00:24:58.995 [Pipeline] } 00:24:59.006 [Pipeline] // catchError 00:24:59.013 [Pipeline] stage 00:24:59.015 [Pipeline] { (Stop VM) 00:24:59.026 [Pipeline] sh 00:24:59.305 + vagrant halt 00:25:02.623 ==> default: Halting domain... 00:25:09.201 [Pipeline] sh 00:25:09.515 + vagrant destroy -f 00:25:12.798 ==> default: Removing domain... 
00:25:13.068 [Pipeline] sh 00:25:13.348 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:25:13.357 [Pipeline] } 00:25:13.372 [Pipeline] // stage 00:25:13.376 [Pipeline] } 00:25:13.390 [Pipeline] // dir 00:25:13.394 [Pipeline] } 00:25:13.408 [Pipeline] // wrap 00:25:13.415 [Pipeline] } 00:25:13.425 [Pipeline] // catchError 00:25:13.434 [Pipeline] stage 00:25:13.436 [Pipeline] { (Epilogue) 00:25:13.448 [Pipeline] sh 00:25:13.761 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:19.042 [Pipeline] catchError 00:25:19.044 [Pipeline] { 00:25:19.057 [Pipeline] sh 00:25:19.338 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:19.338 Artifacts sizes are good 00:25:19.348 [Pipeline] } 00:25:19.364 [Pipeline] // catchError 00:25:19.374 [Pipeline] archiveArtifacts 00:25:19.379 Archiving artifacts 00:25:19.487 [Pipeline] cleanWs 00:25:19.498 [WS-CLEANUP] Deleting project workspace... 00:25:19.498 [WS-CLEANUP] Deferred wipeout is used... 00:25:19.505 [WS-CLEANUP] done 00:25:19.507 [Pipeline] } 00:25:19.522 [Pipeline] // stage 00:25:19.527 [Pipeline] } 00:25:19.541 [Pipeline] // node 00:25:19.547 [Pipeline] End of Pipeline 00:25:19.583 Finished: SUCCESS